Master your next TypeScript interview with our comprehensive collection of questions and expert-crafted answers. Get prepared with real scenarios that top companies ask.
strict matters because it turns TypeScript from "helpful hints" into real compile time safety. It catches bad assumptions early, especially around null, undefined, weak typing, and unsafe object access. In teams, it also creates consistent standards, so code reviews focus less on obvious bugs and more on design.
The options I value most are:
- strictNullChecks: forces you to handle missing values explicitly.
- noImplicitAny: prevents silent loss of type safety.
- strictFunctionTypes: catches unsafe callback and function assignments.
- noUncheckedIndexedAccess: makes array and map lookups safer.
- exactOptionalPropertyTypes: distinguishes "missing" from "present but undefined".
- noImplicitOverride: protects inheritance-heavy code from accidental mismatches.
If I had to pick only two, I would start with strictNullChecks and noImplicitAny, because they eliminate a huge class of real production bugs fast.
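A minimal sketch of what strictNullChecks buys you, using a hypothetical greet function (not from the original answer):

```typescript
// With strictNullChecks on, the compiler refuses to call string methods on
// `name` until the null case has been handled.
function greet(name: string | null): string {
  if (name === null) {
    return "Hello, guest";
  }
  // Inside this branch, `name` is narrowed from `string | null` to `string`.
  return `Hello, ${name.toUpperCase()}`;
}
```

Without the flag, `name.toUpperCase()` would compile even when `name` could be null, and fail at runtime instead.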
TypeScript mainly helps you scale code and teams without losing confidence.
In large systems, the biggest win is not just fewer bugs, it is maintainability. JavaScript works fine, but TypeScript gives teams guardrails that matter more as the app and headcount grow.
I’d frame it as JavaScript with a compile-time safety net. Your code still runs as normal JavaScript, but TypeScript checks shapes, inputs, and outputs before runtime, so you catch mistakes earlier and get better editor help.
I'd start with string, number, arrays, objects, and function signatures. From there, introduce basic tsconfig rules and tighten them over time. I'd also mention unions, narrowing, generics, and any vs unknown as the next concepts to learn.
I’d treat it like risk reduction, not a big-bang rewrite. The goal is to raise safety where changes are already happening, while keeping teams shipping.
- Enable allowJs or a permissive mode first, then tighten flags in stages like noImplicitAny, strictNullChecks, and finally strict.
- Use unknown instead of any for unsafe inputs, then narrow with type guards or runtime validation tools like Zod.
- In practice, I'd pair this with CI rules for new code, but avoid forcing old untouched areas to comply immediately.
I usually model API responses in layers, not as one big trusted type. The transport shape is loose, then I validate and map it into a stricter domain type the app actually uses.
- Use unknown at the boundary, not any, then parse with Zod, io-ts, or custom type guards.
- Model response variants with a discriminant like { status: "ok" | "error" }.
- Allow OldShape | NewShape during migration.
- In practice, I've used an adapter layer in the data client, which let the UI stay stable even while the backend changed field names and nesting.
Both describe shapes in TypeScript, but they shine in slightly different places.
- interface is best for object contracts, especially public APIs and class implementations.
- type is more flexible: it can represent unions, intersections, primitives, tuples, mapped types, and conditional types.
- type cannot be declaration-merged the way interfaces can.

My rule of thumb: use interface for straightforward object shapes that may be extended or implemented, and use type when I need composition or advanced type features like A | B, keyof, or tuple transformations. In many codebases, consistency matters more than the technical difference for simple objects.
TypeScript uses structural typing, which means compatibility is based on an object's shape, not its explicit type name. If two types have the same required properties with compatible property types, one can be assigned to the other, even if they were declared separately.
Given type A = { name: string } and type B = { name: string; age: number }, B is assignable to A because B has at least the structure A requires, while A is not assignable to B because A is missing age. Passing { name: "x", age: 1 } to a function expecting { name: string } is fine, but assigning { name: "x", age: 1, extra: true } to a narrower literal type can be flagged in some contexts. This makes TS flexible, but it can also allow accidental compatibility when two unrelated types happen to share the same shape.
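The assignability rules above can be sketched directly; the getName helper is hypothetical, added only to show the shapes in use:

```typescript
type A = { name: string };
type B = { name: string; age: number };

// Structural typing: `b` is assignable to `A` because it has at least
// the members `A` requires, regardless of the type names.
const b: B = { name: "x", age: 1 };
const a: A = b; // OK: B's structure is a superset of A's

function getName(value: A): string {
  return value.name;
}
```

Note that passing a fresh object literal with extra properties directly, like getName({ name: "x", age: 1 }), would trigger the excess property check; going through a variable, as above, does not.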
Union types let a value be one of several types, like string | number. They’re useful when data can legitimately come in different shapes, and you narrow it before using it. A practical example is an API response state: type Result = { data: User } | { error: string }. Then you check which property exists and handle success or failure safely.
Intersection types combine multiple types into one, like A & B, meaning the value must satisfy both. They’re great for composing behaviors or shared fields. A common use case is extending domain data with metadata, for example User & { lastLogin: Date } or combining BaseEntity & Auditable so an object has both core fields and audit fields without duplicating type definitions.
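Both ideas can be shown in a few lines; the User shape and describe helper are illustrative, not from the original answers:

```typescript
type User = { id: string; name: string };

// Union: a result is either success with data, or an error message.
type Result = { data: User } | { error: string };

// Intersection: a user plus extra metadata, without duplicating fields.
type AuditedUser = User & { lastLogin: Date };

function describe(result: Result): string {
  // Narrow by checking which property exists on this variant.
  if ("data" in result) {
    return `ok: ${result.data.name}`;
  }
  return `failed: ${result.error}`;
}
```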
Generics let you write one abstraction that works across many types while preserving the relationship between inputs and outputs. That improves reusability because you avoid copy-pasting similar logic for User, Order, Product, and so on. It improves type safety because TypeScript can enforce, for example, that if you pass in User[], you get back User[], not any[], and that required properties exist when you constrain a type like T extends { id: string }.
One abstraction I built was a generic data table hook, useTable<T>. It handled sorting, filtering, pagination, and row selection for any entity type. I constrained T so consumers could optionally provide strongly typed column definitions like keyof T, and row IDs via getRowId: (row: T) => string. That gave teams one reusable hook across multiple screens, while keeping column access and callbacks fully type-safe.
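A much smaller sketch of the same constraint idea; findById is a hypothetical helper, not the useTable hook itself:

```typescript
// The constraint guarantees every row has a string `id`, and inference
// preserves the element type: pass User[] in, get User | undefined back.
function findById<T extends { id: string }>(rows: T[], id: string): T | undefined {
  return rows.find((row) => row.id === id);
}

type User = { id: string; email: string };

const users: User[] = [
  { id: "u1", email: "a@example.com" },
  { id: "u2", email: "b@example.com" },
];

// `found` is typed User | undefined, not any.
const found = findById(users, "u2");
```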
They’re core tools for turning runtime-shaped data into safe TypeScript APIs.
- keyof gives you the union of property names of a type, like keyof User becoming "id" | "email".
- typeof captures the type of a value, useful when you want types derived from real constants or config objects.
- Indexed access types, like User["email"] or User[keyof User], let you pull out property value types from existing types.

In real projects, I've used them a lot for form builders, API clients, and config-driven UI. Example: a column config object for a data table. I'd use typeof columns to derive the config type, keyof Row to restrict column keys to valid fields, and Row[K] in generics so each renderer got the correct value type. That cuts duplication and catches mismatches at compile time.
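A condensed sketch of that column-config pattern, with hypothetical names:

```typescript
// A real config object; `as const` keeps literal types.
const columns = {
  id: { label: "ID" },
  email: { label: "Email" },
} as const;

type User = { id: string; email: string };

// typeof derives a type from the value; keyof lists its property names.
type ColumnKey = keyof typeof columns; // "id" | "email"

// Indexed access pulls a property's value type out of an existing type.
type EmailType = User["email"]; // string

// K is restricted to valid keys, and T[K] gives the matching value type.
function pluck<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const user: User = { id: "u1", email: "a@example.com" };
const email = pluck(user, "email"); // typed as string, not any
```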
Conditional types let a type branch based on another type, kind of like a type-level if. The basic shape is T extends U ? X : Y. They’re great when a function or API changes shape depending on input, and you want TypeScript to enforce that relationship automatically.
A common example is an API client where response data depends on options. If includeMeta is true, return Data & Meta; otherwise just Data. I’d model that with something like Response<T extends boolean> = T extends true ? DataWithMeta : Data. I used this pattern for a query builder where selected fields determined the result shape. Instead of returning broad any-like objects, conditional types mapped the selected config to an exact payload type, which improved autocomplete and caught mismatches at compile time.
Conditional types let you say, “if T matches this shape, return one type, otherwise return another.” infer is the extraction tool inside that match. Example: type Elem<T> = T extends (infer U)[] ? U : T pulls out an array’s item type. You’ll also see it in function helpers like extracting return types, parameter tuples, or the resolved type of a Promise.
Distributive conditional types happen when the checked type is a naked type parameter, like T extends X ? A : B. If T is A | B, TypeScript applies the condition to each member separately and unions the results. That’s useful for filtering unions, transforming API response variants, unwrapping nested types, building utility types like Exclude, Extract, NonNullable, or creating pattern-matching style type logic for safer libraries.
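A compact sketch of both ideas; the compile-time checks only typecheck if the aliases behave as described, and firstElement is a hypothetical runtime analogue:

```typescript
// infer extracts the element type from an array type.
type Elem<T> = T extends (infer U)[] ? U : T;

// A naked type parameter distributes over unions: string | number
// becomes string[] | number[], not (string | number)[].
type ToArray<T> = T extends unknown ? T[] : never;

// These assignments fail to compile if the aliases misbehave.
const elemCheck: Elem<number[]> = 42;            // Elem<number[]> is number
const distCheck: ToArray<string | number> = [1]; // string[] | number[]

// Runtime analogue of Elem, just for demonstration.
function firstElement<T>(values: T[]): T {
  return values[0];
}
```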
Type narrowing is TypeScript refining a variable from a broad type like string | number to a more specific type based on control flow. It happens when your runtime checks give the compiler enough evidence that only part of the union is possible in that branch.
Several mechanisms drive narrowing:
- typeof checks, like typeof x === "string", narrow primitives.
- instanceof narrows class or constructor-based types.
- Equality checks, like x === null or comparing literal values, narrow unions.
- Truthiness checks rule out null, undefined, false, 0, or "", though this can be risky.
- The in operator narrows by checking whether a property exists on an object.
- Discriminated unions narrow on a tag field like kind.
- Custom type guards, with return types like value is T, let you encode custom narrowing logic.

Custom type guards are functions that return a boolean, but with a special return type like value is User. That tells TypeScript, "if this returns true, narrow the type to User." They're useful when you get unknown or messy API data and want runtime checks plus compile time safety.
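A minimal sketch of such a guard, assuming a simple User shape:

```typescript
type User = { id: string; name: string };

// The `x is User` return type tells the compiler that a `true` result
// narrows `x` to User in the calling scope.
function isUser(x: unknown): x is User {
  return (
    typeof x === "object" &&
    x !== null &&
    typeof (x as Record<string, unknown>).id === "string" &&
    typeof (x as Record<string, unknown>).name === "string"
  );
}

// JSON.parse returns untrusted data; treat it as unknown, not any.
const data: unknown = JSON.parse('{"id":"u1","name":"Ada"}');

let label = "invalid";
if (isUser(data)) {
  // Inside this branch, `data` is typed as User.
  label = data.name;
}
```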
- A guard like function isUser(x: unknown): x is User checks typeof x === 'object', non-null, and required fields like id and name.
- After if (isUser(data)), TypeScript lets you access data.id safely.
- This matters most for untrusted input, like anything that comes out of JSON.parse.

They represent very different levels of type safety.
- any opts out of checking: you can do anything with it, and TypeScript will not help much. I use it only as a last resort, usually at messy boundaries or during migration.
- unknown means "I do not know yet." You can assign anything to it, but you must narrow it before using it. This is the safe choice for API responses, catch errors, or untrusted input.
- never means a value should never exist. It shows up in functions that always throw, infinite loops, or exhaustive switch checks.

My rule of thumb: prefer unknown over any when data is uncertain, use never to model impossible states, and avoid any unless you intentionally want to bypass the type system.
Mapped types let you transform an existing object type by iterating over its keys. The pattern is basically [K in keyof T]: ..., so you can say “for every property in T, make a new version with different rules,” like optional, readonly, or changed value types. They’re great when you want type changes to stay in sync with the source shape instead of duplicating interfaces.
- Partial<T> makes every property optional, useful for patch/update inputs.
- Required<T> does the opposite, handy when defaults have been applied.
- Pick<T, K> selects a safe subset of fields, like DTOs or view models.
- Omit<T, K> removes fields, often for hiding internal or sensitive data.
- Record<K, V> builds an object type from a key union to one value type, like Record<'en' | 'fr', string>.

I reach for them anytime I want reuse, less duplication, and stricter refactors.
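A few of these in one sketch, with a hypothetical User shape and applyPatch helper:

```typescript
type User = { id: string; email: string; password: string };

// Partial for patch inputs: every field becomes optional.
type UserPatch = Partial<User>;

// Omit for hiding a sensitive field from an API response type.
type PublicUser = Omit<User, "password">;

// Record maps a key union to a single value type.
const labels: Record<"en" | "fr", string> = { en: "Hello", fr: "Bonjour" };

function applyPatch(user: User, patch: UserPatch): User {
  return { ...user, ...patch };
}

const updated = applyPatch(
  { id: "u1", email: "old@example.com", password: "secret" },
  { email: "new@example.com" }
);
```

Because UserPatch is derived from User, renaming a field on User flows into every patch site instead of silently drifting.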
They serve two different parts of the workflow: build output and developer experience.
- Source maps, emitted with sourceMap, link emitted JavaScript back to .ts files, so breakpoints, stack traces, and stepping in DevTools or VS Code show TypeScript lines instead of compiled JS.
- Declaration files, .d.ts, are generated with declaration, and describe the public types of your code without shipping implementation.
- A typical build ships .js for runtime, .map for debugging, and .d.ts for downstream type safety, depending on compiler options.

The biggest challenges are usually type coverage, third party libraries, and team adoption. In a mature JavaScript codebase, you rarely get to "flip to TypeScript" all at once, so I treat it as a risk-managed migration.
- Start with allowJs, checkJs, and noEmit, so we get type feedback without breaking builds.
- Use unknown instead of any, add runtime validation, and tighten types over time.
- Pull in @types packages or write small declaration files as a bridge.

In one migration, we converted shared domain models first and caught several inconsistent API assumptions before they hit production.
I keep React typing pragmatic: type the public API, let inference do the rest, and only get explicit when it improves safety or readability.
- Define props with a type or interface, like type ButtonProps = { variant: 'primary' | 'ghost'; onClick?: () => void }, then function Button(props: ButtonProps).
- Add children?: React.ReactNode if the component actually accepts children. Don't default to React.FC.
- For ref-forwarding components, use forwardRef<HTMLButtonElement, ButtonProps>(...). For local refs, const ref = useRef<HTMLInputElement | null>(null).
- Let useState infer simple values, but specify unions like useState<string | null>(null). Type reducers more explicitly.
- Type events precisely: React.ChangeEvent<HTMLInputElement>, React.MouseEvent<HTMLButtonElement>.
- Use React.ComponentProps<'button'> or extend ButtonHTMLAttributes<HTMLButtonElement> for wrapper components.

I keep async typing simple and explicit. The main rule is: type the resolved value, and let async produce the Promise for you.
- Prefer async function getUser(): Promise<User> over returning any or implicit shapes.
- Let await inference work: const user = await fetchUser() where fetchUser returns Promise<User>.
- For generic helpers, return Promise<T>, as in async function request<T>(): Promise<T>.
- In catch, treat errors as unknown, then narrow with instanceof Error or custom guards.
- Return Promise<Result<User, ApiError>> when failures are expected.

Example approach: fetchJson<T> returns Promise<T>, callers get strong inference, and truly exceptional cases throw. That keeps business logic readable, while expected failures are modeled in the type system instead of hidden in exceptions.
I’d answer this with a quick STAR structure, situation, friction, tradeoff, result, then show that I used TypeScript pragmatically instead of dogmatically.
At a previous team, we integrated a third party payments API with inconsistent response shapes. TypeScript slowed us down at first because the generated types were incomplete, and we kept hitting edge cases the compiler could not model cleanly. Instead of forcing perfect typing everywhere, I drew a boundary. At the API layer, we used lightweight runtime validation and a few narrow unknown to typed conversions. Inside the app, we kept strict types so the rest of the codebase stayed safe. For urgent paths, I allowed temporary @ts-expect-error with a ticket linked for cleanup. That balance let us ship on time, while keeping type debt contained and visible.
TypeScript treats null and undefined as distinct types. Without strictNullChecks, they’re basically allowed everywhere, so a string can silently be null or undefined, which defeats a lot of the safety TypeScript is supposed to give you.
With strictNullChecks: true:
- null and undefined must be explicitly included, like string | null
- the compiler forces you to handle missing values before using them
- APIs like Array.find() return T | undefined, which is more honest
- narrowing works naturally, for example if (user) { ... } or if (value !== undefined)
It matters because most runtime bugs in JS come from accessing something that isn’t there, like user.name when user is undefined. strictNullChecks catches that at compile time instead of letting it blow up in production.
They solve slightly different problems around missing data, which is huge in API design.
- Optional properties use ?, like name?: string, meaning the field may be absent.
- Optional chaining reads nested data with obj?.profile?.email, so access stops safely if something is null or undefined.
- Nullish coalescing writes value ?? fallback, which only falls back for null or undefined, not 0, false, or "".

For APIs, optional properties communicate what clients can omit or what responses may not include. That makes contracts clearer, but too many optionals can make models vague and force defensive code everywhere. Optional chaining helps consumers read nested response data safely, and ?? is better than || when defaults should preserve valid falsy values. In TypeScript, I usually pair this with strictNullChecks so the API surface reflects real absence explicitly.
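All three features in one place; the shapes and defaults are hypothetical:

```typescript
type Profile = { email?: string };
type User = { profile?: Profile; retries?: number };

function summarize(user: User): { email: string; retries: number } {
  return {
    // Optional chaining stops safely if `profile` is absent.
    email: user.profile?.email ?? "unknown",
    // ?? preserves valid falsy values: an explicit 0 is kept, and only
    // null/undefined falls through to the default. `||` would lose the 0.
    retries: user.retries ?? 3,
  };
}
```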
They tighten up places where TypeScript is usually a bit permissive, so you catch bugs earlier instead of letting them slip to runtime.
- noImplicitAny forces you to be explicit when TS cannot infer a type, which prevents accidental weak typing from spreading.
- noUncheckedIndexedAccess makes indexed access like obj[key] or arr[i] return T | undefined, so you handle missing keys and out-of-bounds cases.
- exactOptionalPropertyTypes treats optional props as truly "may be absent", not the same as prop: T | undefined, which makes API contracts more accurate.
- strictFunctionTypes checks function parameter variance more safely, especially for callbacks, so you do not pass handlers that accept narrower inputs than callers might provide.

Net effect: fewer hidden assumptions, stronger contracts, and better refactoring safety.
Literal types let you say a value must be an exact string, number, or boolean, like 'GET', 200, or true. They’re great for fixed options, usually combined as unions, for example 'GET' | 'POST' | 'DELETE'.
Template literal types build new string types from other types, so you can model string patterns, not just fixed values.
- type Method = 'GET' | 'POST' with type RouteKey = `${Method}:/users` gives 'GET:/users' | 'POST:/users'.
- type Lang = 'en' | 'fr' with type Key = `${Lang}_home_title` gives 'en_home_title' | 'fr_home_title'.
- Invalid strings like 'PUT:/users' are rejected at compile time.

So they're a clean way to model constrained string formats without runtime-only validation.
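The route-key example from above as working code; the handler table is a hypothetical addition to show the compile-time constraint in use:

```typescript
type Method = "GET" | "POST";

// Template literal type: expands to "GET:/users" | "POST:/users".
type RouteKey = `${Method}:/users`;

// Record<RouteKey, ...> forces a handler for every valid key, and
// rejects keys outside the union.
const handlers: Record<RouteKey, () => number> = {
  "GET:/users": () => 200,
  "POST:/users": () => 201,
  // "PUT:/users": () => 200, // compile error: not assignable to RouteKey
};

function handle(key: RouteKey): number {
  return handlers[key]();
}
```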
Function overloads let you describe multiple valid call signatures for one implementation. You write several overload signatures, then one implementation signature that is broad enough to handle them all. Callers see the specific signatures, not the implementation one. That gives better autocomplete and return type narrowing based on the arguments.
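A minimal sketch of the pattern, using a hypothetical converter:

```typescript
// Overload signatures: callers only see these two specific shapes,
// so the return type narrows based on the argument.
function convert(value: string): number;
function convert(value: number): string;
// One implementation signature broad enough to cover both.
function convert(value: string | number): number | string {
  return typeof value === "string" ? Number(value) : String(value);
}

const n = convert("42"); // typed as number, not number | string
const s = convert(42);   // typed as string
```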
Common examples:
- A converter that maps string -> number and number -> string depending on the argument.
- A function accepting string | string[] and returning boolean.
- Different call arities, like fn(id) vs fn(first, last).

extends means two related but different things depending on context.
- In a generic constraint, T extends Foo means "T must be assignable to Foo". It restricts what types callers can pass in.
- In inheritance, interface A extends B or class A extends B means "A is a subtype of B". It creates a new type or class based on another.
- A constraint's extends is checked at the point of use; inheritance defines the type itself.
- A constraint's extends does not copy members, it just requires compatibility.
- Class extends also affects runtime behavior, like the prototype chain and inherited methods. With interfaces, it is type-level only.

Example: function f<T extends { id: string }>(x: T) accepts any type with id. But class User extends Person means User actually inherits from Person.
Declaration merging is a TypeScript feature where multiple declarations with the same name are combined into one type or entity. Common cases are interface merging, namespace merging, and augmenting existing module types from libraries.
A common case is augmenting a library's types, like adding fields to Express.Request. I use it deliberately for library augmentation, but avoid it for core app domain types unless the extension point is very clear.
I usually prefer union literal types in TypeScript unless I specifically need runtime behavior.
- enum exists at runtime; union literals like 'admin' | 'user' are type-only.
- const enum removes the runtime cost, but can be fragile with build tools, transpilation, and library boundaries.

I avoid enums when modeling fixed sets of strings in app code, API values, Redux actions, or component props. In those cases, unions plus a const object like as const usually give cleaner types and simpler output.
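A sketch of the const-object alternative mentioned above, with a hypothetical Role set:

```typescript
// A plain const object instead of an enum; `as const` keeps literal types.
const Role = {
  Admin: "admin",
  User: "user",
} as const;

// Derive "admin" | "user" from the object, so values and types never drift.
type Role = (typeof Role)[keyof typeof Role];

function canDelete(role: Role): boolean {
  return role === Role.Admin;
}
```

Declaring a type and a const with the same name is legal because TypeScript keeps value and type namespaces separate, which mirrors how an enum exposes both.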
TypeScript adds OO features on top of JavaScript classes and mostly uses them at compile time. public is the default, private and protected restrict access in TypeScript, and readonly lets you assign a property once, usually in the declaration or constructor. abstract classes let you define shared behavior plus required methods, and they cannot be instantiated.
How much I rely on them depends on the codebase.
- I use classes when modeling stateful domain objects or framework patterns, like Angular services.
- I use readonly a lot, it is great for safer DTOs and config objects.
- I use private and protected moderately, mostly to clarify intent, not as hard runtime security.
- I use abstract classes sparingly, usually when I need shared implementation plus a contract.
- In many TS apps, I still prefer plain objects, functions, and interfaces for simplicity.
Decorators are functions that attach metadata or wrap behavior around classes and class members like methods, fields, accessors, or parameters. In TypeScript, you enable them with compiler options, and they are heavily used in frameworks such as Angular and NestJS.
Common examples are @Injectable() or @Controller() in those frameworks.

I usually treat tsconfig as layered configs: a shared base, then environment-specific extensions. That keeps strictness consistent while changing only runtime assumptions.
- Base: tsconfig.base.json with strict, noUncheckedIndexedAccess, exactOptionalPropertyTypes, moduleResolution, path aliases, and common excludes.
- Node: lib: ["ES2022"], add types: ["node"], pick module and moduleResolution based on ESM or CJS, often NodeNext for modern Node.
- Browser: lib: ["ES2022", "DOM", "DOM.Iterable"], usually no node types, and set JSX options if using React.
- Libraries: declaration, declarationMap, emitDeclarationOnly if bundling separately, and avoid overly specific libs so consumers are not constrained.
- Monorepos: composite: true, one root solution config, package-level configs extending the base.
- Tests and builds: separate tsconfig.test.json or tsconfig.build.json so Jest, Vitest, or scripts do not pollute app typing.

These options control how TypeScript understands your environment and how it emits JavaScript, so they directly affect compatibility, bundling, and import resolution.
- target sets the JS syntax TS emits, like ES2017 or ES2022, affecting output features, bundle size, and whether helpers/polyfills are needed.
- module controls how imports/exports are emitted, like commonjs, esnext, or nodenext, which must match your runtime or bundler.
- lib adds type definitions for built-in APIs, like DOM, ES2021, or WebWorker, affecting type checking only, not emitted code.
- moduleResolution tells TS how to resolve imports, like node, bundler, or nodenext, impacting path lookup and package export handling.
- paths creates import aliases, like @app/*, mainly for type resolution. It does not rewrite runtime imports unless your bundler or runtime also supports the alias.

Bad combinations cause broken builds, missing types, or runtime import failures.
Modules are file based and use import and export. Namespaces are TypeScript specific wrappers, like namespace MyApp { ... }, that group code under a global object unless you use older patterns like triple-slash references.
Why namespaces are less common now:
- ES modules are the JavaScript standard, so TypeScript aligns with how browsers, Node, and bundlers already work.
- Modules have explicit dependencies, which makes code easier to navigate, test, tree-shake, and refactor.
- Namespaces were more useful before module loaders and bundlers were common.
- Modern tooling, like Vite, Webpack, Rollup, and native ESM, is built around modules, not namespaces.
- Namespaces can still make sense for simple global scripts or when modeling an external global library, but that is a niche case now.
Project references let you split a TypeScript codebase into smaller composite projects that know about each other. Instead of one huge compile, TypeScript can build only the projects that changed, in dependency order, via tsc --build. That improves incremental build speed, editor performance, and boundaries between packages.
You’d use them when:
- You have a monorepo or large app with clear modules like shared, api, web
- Teams own separate areas and want explicit contracts between projects
- Full type checking is getting slow, especially in CI or local rebuilds
- You want cached outputs like .d.ts and .tsbuildinfo for faster rebuilds
- You need independent builds but still want safe cross-project typing
The tradeoff is a bit more config and stricter structure, but it pays off once the repo gets large.
TypeScript only protects you at compile time, so for untrusted input, like API responses, form data, or message queues, I treat everything as unknown first and validate at the boundary.
- Validate with schema libraries like zod, io-ts, or valibot.
- Parse from unknown to a trusted type at the boundary.
- Prefer safeParse style APIs when you want structured errors instead of exceptions.

For example, I'd define a UserSchema, validate the raw JSON, reject or log bad payloads, and only pass the parsed result deeper into the app. That avoids the common mistake of doing as User, which silences TypeScript without making the data safe.
Yes. I have used Zod most, some Yup, and a bit of io-ts. They solve a real TypeScript gap: types disappear at runtime, so you still need validation for API payloads, forms, env vars, and third party data.
- With Zod, I validate via safeParse, then infer the TS type via z.infer<typeof Schema>.
- io-ts is more functional, built around Either, but it is more verbose.

I design discriminated unions around a single stable tag field, usually type or status, then make each variant hold only the data valid for that case. That gives you precise narrowing with switch or if, and it makes impossible states unrepresentable.
- A request state union: {status: 'idle'} | {status: 'loading'} | {status: 'success'; data: T} | {status: 'error'; error: Error}.
- Exhaustive switch checks catch missing transitions at compile time.
- After checking status === 'success', TypeScript knows data exists.

For state machines, this maps naturally to finite states and legal transitions. Instead of checking random booleans like isLoading && data, you model exactly what can happen, which reduces bugs and makes refactors safer.
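The request-state union described above can be sketched as follows; the render function is a hypothetical consumer:

```typescript
type User = { name: string };

// Each variant carries only the data valid for that state, so
// "success without data" is unrepresentable.
type RequestState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: User }
  | { status: "error"; error: Error };

function render(state: RequestState): string {
  switch (state.status) {
    case "idle":
      return "waiting";
    case "loading":
      return "spinner";
    case "success":
      // Narrowed: `data` only exists on the success variant.
      return state.data.name;
    case "error":
      return state.error.message;
  }
}
```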
TypeScript does two separate jobs, type checking and transpiling. Tools like Babel, esbuild, and swc mostly handle the transpilation part, so they strip types fast but do not do full semantic type checking. In practice, teams often pair them with tsc --noEmit in CI or a separate watch process to keep type safety.
- tsc: type checks and transpiles, most accurate, usually slowest.
- Babel: transpiles TS syntax via @babel/preset-typescript, fast, no real type checking.
- esbuild: very fast transpilation and bundling, skips type checking.
- swc: similar to esbuild, very fast, good for large apps, no full type checking.
- ts-node: runs TS directly in Node, convenient for scripts/dev, slower unless using --transpileOnly.

Typical trade-off is speed versus safety in the hot path. Fast transpilers improve local dev and builds, while tsc remains the source of truth for catching actual type errors.
.d.ts files are TypeScript declaration files. They describe the shape of a library, functions, classes, module exports, without containing runtime code. TypeScript uses them for type checking and IntelliSense, especially for plain JavaScript packages.
I’ve written or patched them in a few common cases:
- A third-party JS library had no types, so I created a minimal declare module 'lib' file with the APIs we actually used.
- A package had outdated typings, so I used module augmentation to add missing options or fix incorrect return types.
- In one project, a charting library exposed plugin hooks that weren’t typed, so I added interfaces for the callback params to make integrations safe.
- If it was a short-term fix, I kept a local declaration file. If it was broadly useful, I’d open a PR to DefinitelyTyped or the library itself.
I usually treat it as a risk-reduction exercise: start with the small surface area I actually use, then tighten types over time instead of trying to perfectly model the whole library on day one.
- First check for existing types: @types/... packages, a types field, or JSDoc.
- If none exist, write a minimal *.d.ts file with declare module 'lib' and only define the functions, options, and return shapes I use.
- Prefer unknown over any, then narrow with wrappers, type guards, and validated adapters.

I usually model states with discriminated unions, then force the compiler to prove I handled every case. In a switch, I add a default branch that assigns the value to never, like const _exhaustive: never = value. If I later add a new union member and forget to handle it, TypeScript errors immediately.
It’s valuable because it turns missing business logic into a compile-time failure instead of a runtime bug. That matters a lot for reducers, API result states, and UI rendering branches. You can do the same with if/else chains too, by ending with a never assertion helper like assertNever(x). It also makes refactors safer, since adding a new variant shows you every place that needs updating.
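The assertNever pattern mentioned above, sketched with a hypothetical Shape union:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// Only a `never` value can reach this function; any unhandled union
// member makes the call site a compile error.
function assertNever(value: never): never {
  throw new Error(`Unhandled variant: ${JSON.stringify(value)}`);
}

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius ** 2;
    case "square":
      return shape.side ** 2;
    default:
      // Adding a new Shape variant turns this line into a compile error,
      // pointing at every switch that needs updating.
      return assertNever(shape);
  }
}
```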
Mostly with React and Node.js, plus some Angular on enterprise apps. I use TypeScript to make component contracts, API boundaries, and shared domain models explicit, so refactors are safer and autocomplete is actually useful.
- In React: ref typing with forwardRef, and precise event types like ChangeEvent<HTMLInputElement> versus broader synthetic events.
- In Vue: typing props, emits, and keeping inferred types clean in composables.

I'd answer this with a quick STAR structure: situation, the bug, the TypeScript signal, and the business impact.
At a previous team, we were refactoring a checkout flow after an API change. The backend renamed a field from totalAmount to amountTotal, but one part of the frontend still expected the old shape. TypeScript flagged it immediately because our API response types were generated and the component props were strictly typed. Without that, the UI would have rendered undefined, and in one case would have sent the wrong value into a payment summary. We caught it in development, updated the mapping layer, and avoided broken totals, payment confusion, and a pretty painful production hotfix.
I review the usual things in both, readability, correctness, tests, and maintainability, but TypeScript adds a whole extra layer: type design. I am not just asking "does it work?", I am asking "does the type system accurately describe the intent, and will it prevent future mistakes?"
- I push back on any, unsafe casts, and non-null assertions, unless they are clearly justified.
- I look for weak modeling, like passing around string | undefined or broad object shapes.

In JavaScript, more of my focus goes to tests and defensive code. In TypeScript, I expect the types to carry part of that safety.
I’d answer this with a quick STAR structure: situation, approach, outcome, then emphasize risk control.
At a previous team, we had inconsistent API response types duplicated across dozens of frontend files. I refactored them into shared discriminated unions and a few reusable utility types.
- First, I mapped usage with TS references, grep, and editor find-all, so I knew the blast radius.
- I introduced new shared types alongside old ones, then migrated feature by feature instead of doing a big-bang change.
- I used TypeScript errors as a checklist, strict mode, and a temporary compatibility layer to keep PRs reviewable.
- To avoid regressions, I leaned on unit tests, a few integration tests around critical flows, and snapshotting API shapes.
- Outcome: fewer duplicate types, safer narrowing, and future schema changes became much easier to roll out.
I’d optimize for readability first, then reuse. A clever utility is only worth it if it removes real duplication and stays understandable to the team.
Reach for built-in utility types like Pick, Omit, and Partial before inventing custom abstractions. My rule is: abstract stable patterns, not speculative ones. If I build a utility, I'd name it clearly, document the intent, and make sure it simplifies usage rather than showing off type gymnastics.
A few come up over and over, usually when teams want speed and accidentally erase TypeScript’s value.
- any spreading through the codebase: it kills autocomplete, refactoring safety, and trust in types.
- Overuse of as assertions, especially as unknown as X, which often hides real modeling problems.
- Modeling everything as string or object instead of unions, discriminated unions, or proper interfaces.
- Disabling strict settings, especially strictNullChecks and noImplicitAny, which catch tons of bugs early.

My fix is usually: tighten compiler settings, prefer explicit models, use utility types sparingly, and validate external data with something like Zod.
I’d treat it as a speed vs. risk conversation, not a hard no. as any can unblock something, but it also removes the compiler safety net, so I’d first ask what friction they’re hitting and whether there’s a narrower fix.
- Often the narrower fix is unknown plus validation.
- I'd draw the line at letting as any spread through shared APIs, because that's where it creates long-term pain.

In practice, I've done this on deadline-driven integrations: ship with one isolated assertion, then replace it once the backend contract was clarified.
I’d start with the highest leverage fixes, the ones that reduce bugs without freezing delivery. The goal is not to make everything “perfect TypeScript” on day one, it’s to create safer defaults and a path to consistency.
- Turn on strict, noImplicitAny, and strictNullChecks.
- Replace any, broad object, and unsafe assertions with explicit domain models.
- Agree on conventions like type vs interface.

Why first? Compiler rules and conventions give immediate feedback, prevent more debt, and make incremental cleanup realistic.
I keep it pretty lightweight but consistent. I follow the official TypeScript release notes and the team’s blog, and I usually skim each release for anything that affects typing ergonomics, config changes, or editor tooling. I also watch a few library maintainers on GitHub and X, because real-world adoption tells me which features are actually useful versus just interesting on paper. If something looks relevant, I try it in a small sandbox before using it in production.
One recent improvement I liked was NoInfer, added to help control generic inference. It’s useful when a function has multiple inputs and you want one input to drive the inferred type, instead of TypeScript widening based on the other argument. That makes APIs safer and error messages clearer, especially in utility functions and reusable hooks.
Yes. I usually mentor by pairing, reviewing real PRs, and teaching concepts in the codebase instead of in isolation. That helps people connect TypeScript rules to actual bugs and design decisions.
- A common sticking point was unknown versus any. I used runtime validation examples to show how types become safer after checks.
- Another was utility types like Partial or Pick. I introduced those only after they understood object shapes well.