CLR is basically the .NET execution engine.
When you write C# or another .NET language, your code is not compiled straight to native machine code first. It gets compiled into IL, Intermediate Language. Then the CLR steps in and handles running that code.
What the CLR does:
- Converts IL into machine code using JIT, Just-In-Time compilation
- Manages memory with garbage collection
- Handles exceptions
- Enforces type safety
- Provides security and runtime checks
- Manages things like threading and assembly loading
The easy way to think about it:
- Your code says what to do
- The CLR figures out how to run it safely and efficiently on the machine
So if someone asks what CLR is in one line, I’d say:
"It’s the runtime in .NET that executes managed code and provides core services like memory management, garbage collection, exception handling, and JIT compilation."
A clean way to answer this is:
Example answer:
CTS, or Common Type System, is basically the rulebook for types in .NET.
It defines:
- The full set of types .NET languages can use, both value types and reference types
- How those types are declared, used, and managed at runtime

Why that matters:
- A string, int, class, interface, or exception means the same thing across languages

So if I build a class library in C#, a VB.NET app can still consume it because CTS makes sure both sides agree on how types are represented and handled.
In short, CTS gives .NET type safety, consistency, and cross-language interoperability.
A delegate in .NET is a type that represents references to methods with a specific parameter list and return type. In essence, it acts like a function pointer in C or C++, but is type-safe and secure. You can use delegates to pass methods as arguments to other methods, design callback methods, and define event handlers.
To use a delegate, you first declare it, which specifies the signature of the method it can point to. Then, you create instances of the delegate, assigning them methods that match its signature. You can invoke these methods through the delegate instance, either synchronously or asynchronously. This is especially useful for designing extensible and flexible code, where the method to be invoked can be decided at runtime.
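The declare/assign/invoke steps above can be sketched like this (the delegate and method names are illustrative, not from any specific library):

```csharp
using System;

// Declare a delegate type: it can reference any method
// that takes two ints and returns an int.
public delegate int MathOperation(int a, int b);

public static class DelegateDemo
{
    static int Add(int a, int b) => a + b;

    public static void Main()
    {
        MathOperation op = Add;       // assign a method with a matching signature
        Console.WriteLine(op(2, 3));  // invoke Add through the delegate: 5

        op = (a, b) => a * b;         // lambdas match the signature too
        Console.WriteLine(op(2, 3));  // 6
    }
}
```

Because `op` is just a variable, it can be passed into other methods, which is what enables callbacks and event handlers.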
A delegate in .NET is essentially a type-safe function pointer or a reference to a method. It allows you to encapsulate a method call in a variable, which means you can pass methods around like any other object. Delegates are declared using the delegate keyword and can point to methods that match their signature.
An event, on the other hand, provides a way to define a publisher/subscriber relationship between classes. It uses delegates internally, but it adds a layer of abstraction that enforces encapsulation. Events are typically used to signal that something has happened, and subscribers can register their handlers to be notified when that event occurs. The addition of an event keyword in C# makes it so that subscribers can't invoke the event directly; only the class that declares the event can raise it.
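A minimal sketch of that publisher/subscriber relationship (the `Downloader` class and its members are made up for illustration):

```csharp
using System;

public class Downloader
{
    // The event keyword means outsiders can only subscribe (+=) or
    // unsubscribe (-=); only Downloader itself can raise it.
    public event EventHandler<string>? Completed;

    public void Download(string url)
    {
        // ... work happens here ...
        Completed?.Invoke(this, url); // notify all subscribers
    }
}

public static class EventDemo
{
    public static void Main()
    {
        var d = new Downloader();
        d.Completed += (sender, url) => Console.WriteLine($"Done: {url}");
        d.Download("https://example.com");
    }
}
```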
The "using" statement in C# is primarily used to ensure that resources are disposed of properly. It's typically used with types that implement the IDisposable interface, like file streams, database connections, or other unmanaged resources. When the "using" block is exited, the Dispose method is automatically called, even if an exception is thrown inside the block. This helps in managing memory and resource leaks efficiently. It's essentially a form of syntactic sugar for a try/finally block that handles disposal.
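Both forms of "using" can be sketched like this (file names are arbitrary):

```csharp
using System;
using System.IO;

// Classic using block: Dispose() is guaranteed to run when the
// block exits, even if an exception is thrown inside it.
using (var writer = new StreamWriter("log.txt"))
{
    writer.WriteLine("hello");
} // writer.Dispose() called here, flushing and closing the file

// C# 8+ using declaration: disposed at the end of the enclosing scope.
using var reader = new StreamReader("log.txt");
Console.WriteLine(reader.ReadToEnd());
```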
I handle exceptions in .NET with a few simple rules:
- I don't catch Exception everywhere
- I use finally, using, or await using for cleanup

A typical approach looks like this:
Validate early for expected issues
If something is predictable, like bad input or a missing argument, I use validation instead of relying on exceptions for normal control flow.
Catch at the right layer
For example:
- In lower layers, I catch specific exceptions like SqlException or HttpRequestException
- At the API boundary, I return the right status code and a safe error message
Catch specific exceptions first
I avoid broad catch (Exception) unless it’s at the top level for global handling, logging, or returning a generic error response.
Preserve context
If I rethrow, I use throw; instead of throw ex; so I don’t lose the original stack trace.
Centralize cross-cutting handling
In ASP.NET Core, I usually use middleware or exception filters for consistent logging and responses.
Example:
- In a service calling an external API, I catch HttpRequestException, log it with context, and rethrow or map it to a meaningful error

I also use custom exceptions sparingly, mainly when they represent meaningful business cases, things like OrderNotFoundException or PaymentFailedException. That makes the code easier to reason about than throwing generic exceptions everywhere.
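A short sketch pulling those rules together, validate early, catch the specific exception, and rethrow with `throw;` (the service and endpoint names are hypothetical):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class OrderService
{
    public static async Task<string> GetOrderAsync(HttpClient client, string id)
    {
        // Validate early: predictable problems are not exceptions.
        if (string.IsNullOrWhiteSpace(id))
            throw new ArgumentException("Order id is required.", nameof(id));

        try
        {
            return await client.GetStringAsync($"/orders/{id}");
        }
        catch (HttpRequestException ex)
        {
            // Log with context, then rethrow with 'throw;'
            // so the original stack trace is preserved.
            Console.Error.WriteLine($"Order lookup failed for {id}: {ex.Message}");
            throw;
        }
    }
}
```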
Garbage Collection in .NET is a process that automatically handles the allocation and release of memory in your applications. It’s designed to clean up and free memory that is no longer being used, which helps to prevent memory leaks and optimize memory usage.
The garbage collector works by dividing the managed heap into three generations: Gen 0, Gen 1, and Gen 2. It primarily focuses on objects that have a higher likelihood of becoming unreachable quickly, which are allocated in Gen 0. If an object survives the first round of garbage collection, it gets promoted to the next generation, and so on. This generational approach helps improve performance by minimizing the frequency and duration of garbage collection cycles.
You don't need to manually release memory for most objects; the garbage collector takes care of it. However, you can influence garbage collection through practices like implementing the IDisposable interface and explicitly calling the GC.Collect method, though the latter should be used judiciously.
Generics in .NET allow you to define classes, methods, delegates, or interfaces with a placeholder for the type of data they store or use. This means you can create a single class or method that works with different data types while maintaining type safety. For example, instead of creating multiple versions of a list to store integers, strings, or custom objects, you can create a generic list that adapts to whatever type you specify.
Using generics provides several benefits: improved code reuse, better compile-time type checking, and enhanced performance. They allow you to avoid boxing and unboxing for value types and the need to cast to and from object for reference types, which also reduces runtime errors.
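A small sketch of a generic method, one implementation, any type, full compile-time checking (names are illustrative):

```csharp
using System;
using System.Collections.Generic;

public static class GenericsDemo
{
    // T is a placeholder filled in by the caller; no casts, no boxing.
    public static T FirstOrFallback<T>(IReadOnlyList<T> items, T fallback) =>
        items.Count > 0 ? items[0] : fallback;

    public static void Main()
    {
        Console.WriteLine(FirstOrFallback(new List<int> { 1, 2, 3 }, -1)); // 1
        Console.WriteLine(FirstOrFallback(new List<string>(), "none"));    // none
    }
}
```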
Dependency Injection (DI) in .NET is a design pattern used to implement Inversion of Control (IoC) where the control of creating and managing dependencies is transferred from the class itself to an external entity. In practical terms, this means that instead of a class instantiating its dependencies directly, they are provided to the class via constructor parameters, properties, or method parameters. This approach promotes loose coupling and enhances testability and maintainability of code.
For example, consider a class ServiceA that depends on RepositoryA. Instead of creating an instance of RepositoryA inside ServiceA, you pass an instance of RepositoryA through the constructor of ServiceA. In .NET, the built-in DI container allows you to register your services and dependencies in the Startup.cs file (or Program.cs in .NET 6 and later). This way, when ServiceA is requested, the DI container injects the necessary dependencies automatically, making the code cleaner and easier to manage.
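A minimal sketch of the ServiceA/RepositoryA example using the same container ASP.NET Core uses, configured by hand (assumes the Microsoft.Extensions.DependencyInjection package; the types are the hypothetical ones from the text):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IRepositoryA { string Load(); }
public class RepositoryA : IRepositoryA { public string Load() => "data"; }

public class ServiceA
{
    private readonly IRepositoryA _repo;
    public ServiceA(IRepositoryA repo) => _repo = repo; // injected, not new'd
    public string DoWork() => _repo.Load();
}

public static class DiDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<IRepositoryA, RepositoryA>();
        services.AddScoped<ServiceA>();

        using var provider = services.BuildServiceProvider();
        using var scope = provider.CreateScope();

        // The container resolves ServiceA and injects IRepositoryA for us.
        var service = scope.ServiceProvider.GetRequiredService<ServiceA>();
        Console.WriteLine(service.DoWork());
    }
}
```

In a web app the registrations look the same, but go on `builder.Services` in Program.cs (or in Startup.cs on older versions).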
A NuGet package is essentially a single ZIP file with a .nupkg extension that contains compiled code (DLLs), related metadata, and other resources like configuration files. It serves as a way to share and reuse code, enabling developers to add third-party libraries or even their own reusable components to their projects seamlessly.
In a .NET project, you manage NuGet packages using the NuGet Package Manager in Visual Studio, the dotnet CLI, or the NuGet CLI. You can search for packages, install them, and manage their dependencies. Once installed, a NuGet package will automatically add the necessary references to your project, making it effortless to incorporate and utilize external libraries.
Extension methods in C# allow you to add new methods to existing types without modifying their source code. They are static methods but are called as if they were instance methods on the extended type. To create an extension method, you define a static class and then add static methods within it. The first parameter of each method specifies the type it extends, prefixed with the this keyword. They're particularly useful for enhancing classes you don't have direct control over, like .NET built-in types.
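A small sketch, static class, static method, `this` on the first parameter (the `Truncate` method is invented for illustration):

```csharp
using System;

public static class StringExtensions
{
    // 'this string value' makes Truncate callable as if it were
    // an instance method on string.
    public static string Truncate(this string value, int maxLength) =>
        value.Length <= maxLength ? value : value.Substring(0, maxLength) + "...";
}

public static class ExtensionDemo
{
    public static void Main() =>
        Console.WriteLine("Hello, world".Truncate(5)); // Hello...
}
```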
In ASP.NET, the main authentication types I’d call out are:
Windows Authentication
Uses the user’s Windows or Active Directory identity.
Best fit for internal apps on a corporate network, where users are already signed in.
Forms Authentication
Classic web app approach. The user logs in through a custom login page, and the app tracks the authenticated session, usually with a cookie.
Common in older ASP.NET MVC and Web Forms apps.
Cookie Authentication
Very common in ASP.NET Core. After login, the app issues an auth cookie, and that cookie is sent on later requests.
Good for server-rendered web apps.
Token-based Authentication
The app issues a token, often a JWT, and the client sends it with each request.
This is the go-to for APIs, SPAs, and mobile apps because it works well in stateless systems.
OAuth
More about delegated authorization than direct authentication, but people often mention it here.
It lets users sign in with providers like Google, Microsoft, or Facebook.
OpenID Connect
This is the authentication layer commonly used on top of OAuth 2.0.
If you want social login or single sign-on, this is usually the more accurate protocol to mention.
Identity-based Authentication
In ASP.NET Core, ASP.NET Core Identity is the membership system that helps manage users, passwords, roles, MFA, lockouts, and login flows.
It is not a protocol itself, but it is often part of how authentication is implemented.
If I were answering in an interview, I'd also mention this distinction: authentication proves who the user is, while authorization decides what they can do, and OAuth is really about the latter.
That usually shows you understand the bigger picture, not just the list.
Partial classes in C# allow you to split the definition of a class across multiple files. This can be particularly useful in large projects where splitting the class file can improve manageability and readability. When compiled, all the parts are combined into a single class by the compiler. Typically, this feature is used in scenarios involving auto-generated code and manual code, allowing developers to work on one part without interfering with the other.
In ASP.NET MVC, I usually think about validation in three layers, model, server, and client.
Model-level validation
This is the most common starting point. I add Data Annotations on view model properties, things like:
- Required
- StringLength
- Range
- RegularExpression
- Compare

That keeps the rules close to the data and makes them easy to maintain.
In the controller, after model binding, I check ModelState.IsValid. If it is not valid, I return the same view and show the validation messages.
For more complex cases, I use:
- IValidatableObject, if validation depends on multiple fields
- ModelState.AddModelError(...), if a rule comes from business logic or a database check

A simple example would be a registration form:
- Email is marked as required and must match email format
- Password has minimum length rules
- ConfirmPassword uses Compare to match the password
- A duplicate email becomes a ModelState error in the controller or service layer

So overall, I use Data Annotations for standard rules, ModelState on the server for enforcement, and client-side validation for a smoother UX.
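The registration form described above can be sketched as a view model (the class name is illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

public class RegisterViewModel
{
    [Required, EmailAddress]
    public string Email { get; set; } = "";

    [Required, StringLength(100, MinimumLength = 8)]
    public string Password { get; set; } = "";

    // Compare validates this field against another property.
    [Compare(nameof(Password), ErrorMessage = "Passwords do not match.")]
    public string ConfirmPassword { get; set; } = "";
}
```

In the controller, the check is then simply `if (!ModelState.IsValid) return View(model);`.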
IEnumerable is used for querying data from in-memory collections like arrays or lists. It works well for LINQ-to-Objects queries and offers deferred execution, meaning the query is only executed when you iterate over it.
IQueryable, on the other hand, is designed for querying data from out-of-memory sources like databases. It enables LINQ-to-SQL or Entity Framework functionality, allowing you to execute queries on a remote datastore. Because IQueryable builds an expression tree, it lets the backend provider decide how to translate and execute the query, optimizing performance.
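A sketch of the difference using an in-memory list (with EF Core, the IQueryable would come from a DbSet and the expression tree would be translated to SQL; here `AsQueryable` just wraps the list for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Customer(string Name, bool IsActive);

public static class QueryDemo
{
    public static void Main()
    {
        var customers = new List<Customer> { new("Ann", true), new("Bob", false) };

        // IEnumerable: LINQ-to-Objects; the filter runs in memory when iterated.
        IEnumerable<Customer> e = customers.Where(c => c.IsActive);

        // IQueryable: the same query captured as an expression tree,
        // which a provider (e.g. EF Core) could translate and execute remotely.
        IQueryable<Customer> q = customers.AsQueryable().Where(c => c.IsActive);

        Console.WriteLine(q.Expression); // prints the expression tree, not results
        Console.WriteLine(e.Count());   // deferred execution happens here
    }
}
```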
The IDisposable interface is used to release unmanaged resources like file handles, database connections, or memory allocated outside the .NET runtime. Implementing IDisposable ensures that these resources are properly cleaned up, which helps prevent resource leaks and improves application performance. The core method in this interface is Dispose(), which should be called when the object is no longer needed. This way, you can explicitly control the cleanup process, rather than relying on the garbage collector, which might not immediately reclaim unmanaged resources.
Method overriding in C# occurs when a subclass provides a specific implementation for a method that is already defined in its superclass. The overridden method in the subclass should have the same signature as the method in the parent class, and it uses the 'override' keyword.
Method overloading, on the other hand, happens within the same class and involves creating multiple methods with the same name but different parameters (either in type, number, or both). Overloading increases the readability of the program and allows different versions of a method to be called based on the argument types and numbers.
So, overriding is about redefining a method in a subclass to change or extend its behavior, while overloading is about having multiple methods with the same name but different signatures within the same class.
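Both ideas in one small sketch (the Animal/Dog types are invented for illustration):

```csharp
using System;

public class Animal
{
    public virtual string Speak() => "..."; // virtual: can be overridden
}

public class Dog : Animal
{
    // Overriding: same signature, redefined in the subclass.
    public override string Speak() => "Woof";

    // Overloading: same name, different parameter list, same class.
    public string Speak(int exclamations) => Speak() + new string('!', exclamations);
}

public static class Demo
{
    public static void Main()
    {
        Animal a = new Dog();
        Console.WriteLine(a.Speak());          // Woof (runtime dispatch)
        Console.WriteLine(new Dog().Speak(2)); // Woof!!
    }
}
```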
Middleware in ASP.NET Core is part of the request pipeline and is used to handle requests and responses. Essentially, each piece of middleware sits in the pipeline to process incoming requests and can forward these requests to the next middleware in the sequence or terminate the request right there. This allows for a highly flexible and modular approach to request processing.
When a request hits the server, it goes through a series of middleware components, each potentially modifying the request or performing specific tasks like authentication, logging, or data compression. Middleware components are added to the pipeline in the Configure method of the Startup class using the app.Use method. The order in which middleware is added is crucial because it dictates the sequence of operations executed for each request.
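A minimal sketch of an inline middleware component (this uses the .NET 6+ minimal hosting style; in older ASP.NET Core versions the same `app.Use` call lives in Startup.Configure):

```csharp
var app = WebApplication.CreateBuilder(args).Build();

app.Use(async (context, next) =>
{
    // Runs before the rest of the pipeline.
    Console.WriteLine($"-> {context.Request.Path}");

    await next(); // pass the request to the next middleware

    // Runs after the rest of the pipeline has produced a response.
    Console.WriteLine($"<- {context.Response.StatusCode}");
});

app.MapGet("/", () => "Hello");
app.Run();
```

Swapping the order of `app.Use` calls changes the order of these before/after steps, which is why registration order matters.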
TPL, or the Task Parallel Library, is the .NET framework for doing concurrent and parallel work without managing threads directly.
At a high level, it gives you a better abstraction than raw Thread or manual ThreadPool usage.
What it does well:
- Represents work as Task or Task<T>
- Integrates cleanly with async and await

The key idea is this: you describe units of work as tasks, and the runtime schedules them on the thread pool for you.
For example:
- Task.Run(...) is a simple way to queue CPU-bound work
- Task.WhenAll(...) lets you wait for multiple tasks together
- Parallel.For and Parallel.ForEach help with data parallelism when the same operation needs to run across many items

Why it's useful:
- It handles scheduling, continuations, cancellation, and exception aggregation that you'd otherwise build by hand
One important distinction:
- TPL and the Parallel class are mainly for CPU-bound parallel work
- async and await are for non-blocking asynchronous flows, often I/O-bound operations

A simple way to explain it in an interview is:
“TPL is .NET’s task-based concurrency model. It lets you run and coordinate work using Task objects instead of managing threads yourself. It’s useful for parallel execution, background processing, continuations, cancellation, and exception handling, and it’s a big improvement over working directly with Thread or ThreadPool.”
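The Task.Run / Task.WhenAll / Parallel.For pieces mentioned above can be sketched together:

```csharp
using System;
using System.Threading.Tasks;

public static class TplDemo
{
    public static async Task Main()
    {
        // Queue two CPU-bound computations on the thread pool.
        Task<int> square = Task.Run(() => 2 * 2);
        Task<int> cube   = Task.Run(() => 3 * 3 * 3);

        // Wait for both together; exceptions would be aggregated here.
        int[] results = await Task.WhenAll(square, cube);
        Console.WriteLine(results[0] + results[1]); // 31

        // Data parallelism: the same operation across many items.
        Parallel.For(0, 4, i => Console.WriteLine($"item {i}"));
    }
}
```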
Threading in .NET is basically about letting your app do more than one thing at a time without everything blocking on a single path of execution.
A simple way to think about it:
- A Thread is an actual execution path
- A Task is a higher-level way to represent work that may run on a thread pool thread
- async/await is the preferred way to handle non-blocking I/O, like database calls, HTTP requests, or file access

The important distinction is this: threads are a low-level mechanism, while tasks and async/await are higher-level abstractions over them.
In practice, most modern .NET code uses:
- Task and async/await for most asynchronous work
- Parallel.ForEach for parallel CPU work when it actually makes sense

Manual Thread creation still exists, but it is less common because it is lower level and harder to manage.
The main benefits of threading are:
- Responsiveness, the app does not freeze while work runs
- Throughput, you can use multiple cores at once

The main risks are:
- Race conditions and shared-state bugs
- Deadlocks
- Code that is harder to debug and reason about
So if I were explaining my approach in a .NET app, I would say:
- async/await for I/O-bound operations
- Task and parallel processing carefully for CPU-bound work

That shows you understand both the concept and how it is used in real production code.
Handling deadlocks in a multi-threaded environment starts with prevention through careful design. One effective method is to acquire locks in a consistent order across all threads, which helps avoid circular wait conditions. Also, introducing a timeout mechanism while acquiring locks can help detect potential deadlocks. If a thread can't obtain a lock within the specified timeout, it can roll back its operations and retry, thus breaking the deadlock.
Another strategy is to minimize the scope and duration of locks, reducing the chances of contention. Additionally, using higher-level concurrency constructs, like the Task Parallel Library in .NET, can help manage complex synchronization scenarios more safely than manual threading and locking.
Finally, proper monitoring and logging are crucial. Logging can help detect deadlocks early in the development cycle by capturing details about threads and locks. Tools like Visual Studio's concurrency profiler can also be used to visualize and diagnose deadlocks in running applications.
Razor is the view syntax ASP.NET uses to mix C# with HTML.
The main idea is simple: you switch from HTML into C# with the @ symbol.

You'll mostly see it in:
- .cshtml view files in MVC and Razor Pages

A few common uses:
- Printing values, like @Model.Name
- Loops and conditionals, like @foreach and @if

Why it's useful:
- It keeps the markup readable while letting you drop in just the C# needed for dynamic data
Example, if you have a list of products, Razor lets you loop through Model.Products in the view and generate an <li> for each one. So instead of building HTML in code, you keep the markup in the view where it belongs, and just drop in the bits of C# needed to render dynamic data.
I think about ASP.NET security in layers. Not just login, but the full path from the browser to the database to deployment.
A solid way to answer this is:
Then give a practical example of how you apply it.
For me, the main areas are:
Use HTTPS everywhere
Enforce HTTPS redirection and turn on HSTS in production
Strong authentication and authorization
Follow least privilege, users and services only get what they need
Protect against common attacks
Use anti-forgery protection for form posts to prevent CSRF
Secure cookies and sessions
- Mark cookies HttpOnly and Secure
- Apply SameSite policies
- Don't store sensitive data in the client
Secrets and configuration
Separate config by environment
Error handling and logging
Monitor logs and alerts
Keep dependencies and platform updated
Run dependency and vulnerability scans in CI/CD
Add security headers where appropriate
- Content-Security-Policy
- X-Content-Type-Options
- X-Frame-Options or frame-ancestors
- Referrer-Policy
Secure APIs too
A concrete example:
On a recent internal app, I secured it by:
- Enforcing HTTPS and HSTS
- Using cookie authentication with HttpOnly, Secure cookies
- Adding anti-forgery tokens on form posts
- Keeping all data access parameterized through the ORM
- Moving secrets into environment-based configuration
That gave us a good baseline, and then we reviewed it during testing with automated scans and a quick manual security checklist before release.
I’d explain it like this:
An interface is a contract.
- It describes what a type can do, without carrying state
- Think ILogger, IDisposable, or IEnumerable

An abstract class is a base type with shared behavior.
- It can hold fields, constructors, and implemented methods that subclasses inherit
The practical difference is usually this:
- Use an interface when you want to describe what something can do
- Use an abstract class when you want to share code and state across related types

Quick example:
- interface IVehicle might require Start() and Stop()
- abstract class Vehicle might store shared data like Speed and provide common logic for acceleration

One extra note, in modern C#, interfaces can also have default implementations. Even with that, I still think of interfaces as contracts first, and abstract classes as a tool for inheritance and shared behavior.
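The IVehicle/Vehicle example can be sketched like this (member names beyond those mentioned in the text are illustrative):

```csharp
public interface IVehicle
{
    void Start();
    void Stop();
}

public abstract class Vehicle : IVehicle
{
    public int Speed { get; protected set; }        // shared state

    public void Accelerate(int delta) => Speed += delta; // shared behavior

    public abstract void Start();                   // each subtype must define
    public virtual void Stop() => Speed = 0;        // overridable default
}

public class Car : Vehicle
{
    public override void Start() => Speed = 5;
}
```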
A singleton is a pattern where a class can only have one instance, and the whole app uses that same instance.
You usually use it for things like:
In .NET, the basic implementation is:
- The class is sealed, so it cannot be inherited
- The constructor is private, so nobody can create it directly

A clean version looks like this in practice:

```csharp
public sealed class Singleton
{
    private static readonly Singleton _instance = new Singleton();
    private Singleton() { }
    public static Singleton Instance => _instance;
}
```

Why this works:
- static readonly creates one instance for the lifetime of the app domain
- The private constructor blocks new Singleton() from outside the class

One important point in .NET:
- Prefer Lazy<T> instead of writing your own locking unless you really need to

Example idea:

```csharp
public sealed class MyService
{
    private static readonly Lazy<MyService> _instance = new(() => new MyService());
    public static MyService Instance => _instance.Value;
    private MyService() { }
}
```

One practical note, in modern .NET apps, especially with ASP.NET Core, I would usually prefer dependency injection over a classic singleton pattern. If I need one shared instance, I register it as a singleton in the DI container. That gives me the same lifecycle benefit, but with better testability and cleaner design.
The easiest way to explain it is this:
In synchronous programming, work happens step by step.
Example: - Read a file - Wait until it finishes - Then call the database - Wait again - Then return the response
Asynchronous programming is different. It lets your app start an operation, like a database call or HTTP request, and use that time more efficiently instead of just sitting there waiting.
In .NET, this usually means using async and await.
- async marks a method that contains asynchronous work
- await pauses that method until the operation completes

That matters most for I/O-bound work, like:
- Database queries
- HTTP calls
- File reads and writes

Why it matters:
- The thread is freed while waiting, so the app stays responsive and scales better under load
One important point, async does not mean faster CPU work by default.
So in an interview, I would frame it like this:
In .NET, asynchronous programming is especially useful in ASP.NET Core, where freeing up threads during I/O helps the app handle more requests under load.
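A minimal sketch of the async/await shape for I/O-bound work (the method name is illustrative):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class AsyncDemo
{
    // 'async' marks the method; 'await' suspends it while the HTTP
    // call is in flight, releasing the thread back to the pool.
    public static async Task<int> GetLengthAsync(HttpClient client, string url)
    {
        string body = await client.GetStringAsync(url); // no thread blocked here
        return body.Length;
    }
}
```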
Managed code is code that runs under the .NET runtime, specifically the CLR.
In simple terms:
- Your C# code compiles to IL, intermediate language
- The CLR then JIT-compiles that IL and runs it

What the CLR handles for you:
- Garbage collection
- Type safety checks
- Exception handling
- Security and other runtime services
Why it matters:
- You get memory safety and fewer low-level bugs than with unmanaged code

A simple way to explain it in an interview is to contrast it with unmanaged code.

Example:
- C# running on the CLR is managed code; C++ compiled straight to native machine code is unmanaged
So if I had to say it naturally in an interview:
"Managed code is code that executes under the control of the CLR in .NET. Instead of compiling straight to native machine code, it first compiles to IL, and then the runtime converts it when the app runs. The big advantage is that the CLR provides services like garbage collection, type checking, and exception handling, which makes the application safer and easier to maintain."
I’d answer this by defining what an assembly is first, then listing the common types and why they matter.
An assembly in .NET is the compiled unit that the CLR loads and runs.
It usually contains: - IL code, the compiled intermediate code - Metadata, like type information and references - A manifest, which includes version, culture, and assembly identity - Optional resources, like images or localized strings
So in practice, an assembly is both: - a deployment unit - a versioning unit
Common assembly types:
Private assembly
Deployed in the application's own folder and used only by that app.
Best when the dependency is only meant for that one app

Shared assembly
Installed in a shared location, traditionally the Global Assembly Cache (GAC), so multiple applications can use the same strongly named version.

You can also mention the physical forms:
- DLL, a class library or reusable component
- EXE, an executable assembly
If I wanted to keep it interview-friendly, I’d say:
“An assembly is the basic compiled unit in .NET that the runtime loads. It contains the code plus metadata and manifest information for versioning and identity. The main types are private assemblies, which are used by one app, and shared assemblies, which can be used across multiple apps, traditionally through the GAC.”
LINQ is basically a way to query and shape data directly in C#.
Instead of writing a bunch of nested loops, temp variables, and if statements, you can do things like:
- Where
- OrderBy
- Select
- GroupBy
- Any, All, FirstOrDefault

Why it's useful:
- It makes common data operations shorter, more readable, and type-safe
A simple example is filtering a list of customers to find active ones, then sorting by name. With LINQ, that becomes a couple of readable method calls instead of a manual loop.
In interviews, I’d describe it as, “LINQ is C#’s built-in querying feature for collections and other data sources. It’s useful because it makes data operations more readable, concise, and type-safe.”
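The customer example mentioned above, as actual code (the Customer record is invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Customer(string Name, bool IsActive);

public static class LinqDemo
{
    public static void Main()
    {
        var customers = new List<Customer>
        {
            new("Zoe", true), new("Ann", true), new("Bob", false)
        };

        // Filter active customers, sort by name, project to names.
        var active = customers
            .Where(c => c.IsActive)
            .OrderBy(c => c.Name)
            .Select(c => c.Name);

        Console.WriteLine(string.Join(", ", active)); // Ann, Zoe
    }
}
```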
I’d explain it by comparing them across three things: platform support, current usage, and where Microsoft is investing.
.NET Framework
The original, Windows-only version of .NET. It's mature and stable, but basically in maintenance mode now. Microsoft still supports it, but new innovation is happening elsewhere.
.NET Core
It introduced the modern direction of .NET, but the name stopped after .NET Core 3.1.
.NET 5+
Starting with .NET 5, Microsoft dropped the word "Core" and moved to one unified platform: .NET 5, .NET 6, .NET 7, .NET 8, and so on. It's the continuation of .NET Core, and it's now the main platform for new development.

The simple way to remember it:
- .NET Framework = legacy, Windows-only
- .NET Core = modern, cross-platform foundation
- .NET 5+ = current unified .NET, and the future going forward

In practice, if a company has an older internal app, it might still be on .NET Framework. If they're building new APIs, cloud services, or modern apps, they're usually on .NET 6+ or newer.
I’d answer this by grouping the improvements into a few buckets:
Then I’d give a few concrete examples instead of listing every feature.
A solid answer would be:
ASP.NET Core is a big step forward from classic ASP.NET because it was built for modern application development.
Some of the biggest improvements are:
Cross-platform support
It runs on Windows, Linux, and macOS, which gives teams much more flexibility in development and deployment.
Much better performance
ASP.NET Core is lighter, faster, and designed for high-throughput web apps and APIs. Kestrel also made hosting more efficient.
Modular architecture
Instead of bringing in a huge framework by default, you only add the packages and middleware you need. That keeps apps leaner and easier to maintain.
Built-in dependency injection
In older ASP.NET, DI often required third-party tools and extra setup. In ASP.NET Core, it’s part of the framework, which makes application design cleaner and more testable.
Unified framework
MVC and Web API were brought into a more consistent model, so building web apps and REST APIs feels much more streamlined.
Better cloud and container readiness
ASP.NET Core was designed with cloud deployment in mind, so it works well with Docker, Kubernetes, environment-based config, and scalable hosting.
Cleaner configuration and middleware pipeline
Things like routing, authentication, logging, and error handling are more explicit and easier to control through the request pipeline.
Open source and faster evolution
Because it’s open source and maintained publicly, improvements happen faster and the ecosystem is more transparent.
If I wanted to keep it very concise in an interview, I’d say:
“ASP.NET Core improved on older ASP.NET by being cross-platform, faster, more modular, and more cloud-friendly. It also introduced built-in dependency injection, a cleaner middleware pipeline, and a more unified way to build MVC apps and APIs.”
In C#, the core difference is this:
- A value type variable holds the data itself
- A reference type variable holds a reference to an object on the heap
That affects copying, memory allocation, nullability, and performance.
Value type examples:
- int
- double
- bool
- char
- struct
- enum
- DateTime
- Guid
How they behave: - The variable contains the value itself. - Assigning one value-type variable to another copies the data. - Each variable gets its own independent copy.
Example:
- int a = 5;
- int b = a;
- Changing b does not affect a.
Memory: - Often allocated inline, meaning directly inside the containing object or stack frame. - People say "value types live on the stack", but that is not always true. - If a value type is a field in a class, it lives inside that object on the heap. - If it is boxed, it gets wrapped in a heap object.
Reference type examples:
- class
- string
- array
- delegate
- object
How they behave: - The variable holds a reference to an object. - Assigning one reference variable to another copies the reference, not the object. - Two variables can point to the same object.
Example:
- var p1 = new Person();
- var p2 = p1;
- Modifying p2.Name also affects p1.Name, because both references point to the same object.
Memory: - The actual object is typically allocated on the heap. - The reference itself is just a small value stored wherever the variable lives, stack, field, etc. - Heap objects are managed by the garbage collector.
This is the interview-friendly distinction: assignment copies the value for value types, and copies the reference for reference types.
That means: - Value type copies are isolated - Reference type copies share the same underlying object
Value types can be faster when: - They are small - They are short-lived - You want to avoid heap allocation - You want better cache locality
Why: - No separate object allocation - Less garbage collection pressure - Data can be packed more tightly
But value types can be worse when: - They are large structs - They get copied a lot - They are boxed frequently
A large struct passed around by value can be expensive because every assignment or method call may copy all its fields.
Reference types can be better when: - The object is large - You want to share one instance - You need polymorphism - You want to avoid repeatedly copying large data
But they come with: - Heap allocation cost - Garbage collection overhead - Potentially worse memory locality
Boxing and unboxing is an important interview topic.
Boxing:
- Converts a value type to object or an interface it implements
- Creates a heap allocation
Unboxing: - Extracts the value type back out - Requires a cast
Why it matters: - Boxing adds allocation and GC pressure - It can hurt performance in tight loops
Example cases:
- Putting int into a non-generic collection like ArrayList
- Passing a struct as object
Generic collections like List<int> avoid boxing.
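The boxing cases above, in code:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class BoxingDemo
{
    public static void Main()
    {
        int n = 42;

        object boxed = n;         // boxing: a heap object now wraps the int
        int unboxed = (int)boxed; // unboxing: explicit cast required

        var oldList = new ArrayList();
        oldList.Add(n);           // non-generic collection: boxes on every Add

        var newList = new List<int> { n }; // generic collection: no boxing
        Console.WriteLine(unboxed + newList[0]); // 84
    }
}
```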
Nullability
- Value types are non-null by default, like int and bool
- You make them nullable with ?, like int?
- Reference types can be null, though nullable reference types help express intent in modern C#
Strings are a special case
string is a reference type, but it behaves value-like in some ways because:
- It is immutable
- Reassigning a string variable does not modify the original string object
Still, technically, it is a reference type allocated on the heap.
Use a struct when:
- The type is small
- Represents a single value
- Is immutable or mostly immutable
- Does not need inheritance
Use a class when:
- The type is larger or more complex
- Needs identity
- Needs inheritance or shared mutable state
If you want a crisp answer: value types hold their data directly and copies are independent; reference types hold a reference to a heap object and copies share that object.
If they push deeper, mention: - Value types are not always on the stack - Reference variables are not the object itself - Boxing is a common performance trap - Large structs can be slower than classes due to copy cost
A strong practical example is comparing Point as a small struct versus Person as a class:
- Point is tiny and value-like, struct makes sense
- Person has identity and shared state, class makes sense
A strong way to answer this is with a simple STAR structure: Situation, Task, Action, Result.

For debugging questions, interviewers usually want to hear that you were methodical, data-driven, and calm under pressure. So I'd focus on:
- Reproducing and measuring the problem instead of guessing
- Isolating the root cause with data
- Verifying the fix and preventing regressions
Here’s how I’d answer it:
In one project, we had a .NET API that would intermittently slow down and sometimes time out during peak business hours. The tricky part was that it never happened in lower environments, and there were no obvious exceptions in the application logs.
I was responsible for figuring out whether the issue was in our code, the database, or infrastructure. My first step was to avoid guessing and gather better data. I added more structured logging around the slow endpoint, including correlation IDs, execution timing for key steps, and the specific database calls being made. I also used Application Insights to trace requests end-to-end and compare successful versus failing requests.
Once I had that visibility, I noticed the slowdown was always tied to one EF Core query. On the surface, the query looked fine, but under production-sized data it was generating inefficient SQL and pulling back far more data than expected because of an Include chain and a filter being applied too late.
To confirm it, I reproduced the issue against a copy of production-like data, captured the generated SQL, and reviewed the execution plan with our DBA. That showed a table scan on a high-traffic table and a big spike in memory usage.
The fix was a combination of changes:
- I rewrote the query to project only the fields we actually needed
- I removed unnecessary Includes
- I moved filtering earlier in the query
- We added an index to support the access pattern
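A hedged illustration of the query-rewrite pattern (projection plus early filtering), shown with LINQ to objects so it is self-contained; Order and its fields are hypothetical, but against EF Core the same shape translates into a narrower, earlier-filtered SQL query:

```csharp
var orders = new List<Order>
{
    new Order { Id = 1, Status = "Open",   CustomerId = 7 },
    new Order { Id = 2, Status = "Closed", CustomerId = 7 },
}.AsQueryable();   // stand-in for context.Orders

var summaries = orders
    .Where(o => o.Status == "Open")            // filter as early as possible
    .Select(o => new { o.Id, o.CustomerId })   // project only the fields we need
    .ToList();                                  // materialize the small result

class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
    public int CustomerId { get; set; }
}
```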
After that, the endpoint response time dropped from several seconds to a few hundred milliseconds, and the timeouts stopped. As a follow-up, I added query performance logging and a load test around that workflow so we could catch similar regressions before release.
What I like about that example is it shows a structured debugging approach, not just a lucky fix.
I’d handle this in two parts, how to answer it in an interview, and what I’d actually do.
How to structure the answer:
1. Stabilize first, reduce user impact.
2. Gather evidence, don’t guess.
3. Isolate the bottleneck: app, DB, external dependency, infrastructure.
4. Fix the immediate issue.
5. Add protections so it does not happen again.
A strong answer sounds calm and methodical, especially under pressure.
What I’d do:
Stabilize first: roll back, scale out, rate limit, fail over, or degrade gracefully for expensive features.
Look at telemetry first. I’d go straight to observability tools: App Insights, Datadog, Grafana, ELK, whatever the team uses.
I’d check:
- Exception details, stack traces, inner exceptions
- Request rate, response times, failure percentage
- CPU, memory, thread pool usage, GC activity
- DB connection pool usage, query duration, deadlocks, timeouts
- Outbound HTTP dependency failures, latency, socket exhaustion signs
- Pod or VM restarts, container OOM kills, ingress or load balancer errors
The key is to correlate the 500s with a resource or dependency signal.
At this stage I’m trying to answer, is it code, config, infrastructure, or a downstream dependency?
Common .NET-specific causes I’d investigate. Under heavy load, the usual suspects are:
Thread pool starvation
Caused by blocking calls like .Result, .Wait(), or sync I/O in the request path. Fix: make hot paths fully async, remove blocking, review middleware and filters.
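A before/after sketch of removing the blocking call (GetDataAsync is a stand-in for any awaited I/O):

```csharp
class DataService
{
    // Bad: .Result parks a thread-pool thread until the task completes;
    // under load this starves the pool and requests queue up
    public string GetBlocking() => GetDataAsync().Result;

    // Good: await frees the thread back to the pool while the work is in flight
    public async Task<string> GetAsync() => await GetDataAsync();

    private static Task<string> GetDataAsync() => Task.FromResult("data");
}
```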
Db connection pool exhaustion
Fix: ensure connections are disposed properly, optimize queries and indexes, reduce chatty data access, tune pool settings carefully.
HttpClient misuse, socket exhaustion
Often caused by creating a new HttpClient() per request. Fix: use IHttpClientFactory, set sensible timeouts, retries, circuit breakers.
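A registration sketch in Program.cs. GitHubClient, the URL, and the retry numbers are illustrative, and the retry policy requires the Microsoft.Extensions.Http.Polly package:

```csharp
// One factory-managed client instead of new HttpClient() per request;
// the factory pools and recycles handlers, avoiding socket exhaustion
builder.Services.AddHttpClient<GitHubClient>(client =>
{
    client.BaseAddress = new Uri("https://api.github.com/");
    client.Timeout = TimeSpan.FromSeconds(10);          // sensible timeout
})
.AddTransientHttpErrorPolicy(policy =>                  // retries on 5xx, 408, and network errors
    policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(200 * attempt)));
```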
Memory pressure or GC pauses
Fix: reduce allocations, stream large payloads, cap cache size, review object lifetimes.
Lock contention or shared state issues
Fix: remove shared mutable state, use concurrent collections or redesign contention points.
Unhandled exceptions from edge cases
Fix: inspect logs by endpoint, payload shape, user path; add validation and better exception handling.
Check recent changes. I’d always ask what changed recently: deployments, config, feature flags, traffic patterns.
A lot of production incidents come from a small change that only breaks at scale.
Use IHttpClientFactory and Polly policies for retries, timeouts, and circuit breakers. Add backpressure, queueing, or rate limiting for bursty workloads.
Verify in production. After the change:
Make sure the fix works under repeated load testing, not just at normal traffic.
Prevent recurrence. I’d add monitoring, alerting, and resilience policies so the same failure is caught earlier next time.
If I wanted to make this sound strong in an interview, I’d say something like:
“My first step is to reduce customer impact, rollback, scale out, or rate limit if needed. Then I’d use telemetry to correlate the 500s with app exceptions, resource saturation, or dependency failures. In a .NET API under heavy load, I’d specifically check for thread pool starvation, DB connection pool exhaustion, HttpClient misuse, memory pressure, and slow downstream calls. Once I isolate the bottleneck, I’d implement the fix, validate it with load testing, and add monitoring and resilience so it does not happen again.”
In ASP.NET Core DI, interface-based registration means you register a contract to an implementation, like IEmailSender -> SmtpEmailSender.
Example:
- services.AddTransient<IEmailSender, SmtpEmailSender>()
- services.AddScoped<IEmailSender, SmtpEmailSender>()
- services.AddSingleton<IEmailSender, SmtpEmailSender>()
The difference is the lifetime of the created object.
Transient means a new instance is created every time the service is resolved. If two classes both depend on IEmailSender, they each get separate instances.

Use it for:
- Lightweight, stateless services
- Services with no shared state
- Pure business logic helpers

Examples:
- Formatters
- Mappers
- Calculation services
- Small domain services

Watch out for:
- Too many object creations if the service is expensive
- Injecting a transient into a singleton can be okay, but understand it gets created when the singleton is built if resolved there
Scoped means one instance per HTTP request. Use it for:
- Services that should share request-specific state
- Services that work with DbContext
- Unit of work style services
Examples:
- Repository or application services that use EF Core
- Current-user context services
- Request-level caching services
Most common real-world choice: Scoped is often the default for services that access the database.
Watch out for:
- Never inject scoped services directly into singletons
- That causes lifetime mismatch and can lead to runtime errors or incorrect behavior
Singleton means one instance for the whole app lifetime. Use it for:
- Stateless services that are expensive to create
- Shared infrastructure
- App-wide caches or configuration-like services
Examples:
- In-memory cache wrappers
- Precomputed lookup services
- Services holding reusable expensive resources
- Custom configuration providers
Watch out for:
- Must be thread-safe
- Should not depend on scoped services
- If it stores mutable state, that state is shared across all users and requests
How I usually choose:
- Transient for simple stateless logic
- Scoped for request-based work, especially anything touching EF Core
- Singleton for shared, thread-safe, app-wide services
A practical mental model:
- Transient = new every time
- Scoped = once per request
- Singleton = once per app
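That mental model can be checked with a small console sketch. This assumes the Microsoft.Extensions.DependencyInjection package, and Stamp is just a throwaway type for the demo:

```csharp
var services = new ServiceCollection();
services.AddTransient<Stamp>();     // swap for AddScoped / AddSingleton to compare

var provider = services.BuildServiceProvider();
using var scope = provider.CreateScope();

var first  = scope.ServiceProvider.GetRequiredService<Stamp>();
var second = scope.ServiceProvider.GetRequiredService<Stamp>();

// Transient: false (two instances). Scoped: true within this scope. Singleton: true everywhere.
Console.WriteLine(ReferenceEquals(first, second));

class Stamp { }
```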
Example interview answer:
“I register services by interface so the implementation is decoupled from consumers and easy to swap or test. Then I choose the lifetime based on how long the instance should live. Transient creates a new instance every resolution, so I use it for lightweight stateless services. Scoped creates one instance per request, so it’s ideal for database-related services and request-specific state. Singleton creates one instance for the whole app lifetime, so I use it for shared, thread-safe services like caches or expensive reusable components. The main thing I watch for is lifetime mismatches, especially not injecting scoped services into singletons.”
I’d handle that by balancing engineering judgment with team alignment.
A good way to answer this in an interview is:
My approach:
For example, maintainability, performance, delivery speed, cost, team familiarity, or scalability.
Then I make the options explicit.
In a .NET solution, that might be:
- IQueryable exposure vs strict repository boundaries

I like to compare options against a few agreed criteria, for example how reversible the decision is.
If the team is stuck in opinions, I push for evidence.
Check how well each option fits the existing architecture
If there is still no consensus, I use a decision owner model.
Once a decision is made, I commit fully, even if my preferred option was not chosen.
After implementation, I like to revisit the decision.
Example answer:
“In development teams, I try to treat disagreement as a good sign, because it usually means people care about quality. My first step is to align on what matters most for that decision. In a .NET project, that could be delivery speed, clean architecture, performance, or ease of testing.
On one project, we had a disagreement about whether a new reporting feature should be built using our standard EF Core service pattern or a separate optimized query approach with stored procedures. One group wanted consistency with the rest of the codebase, the other was worried about query performance because the dataset was large.
I helped structure the discussion around tradeoffs instead of preferences. We agreed on criteria like response time, maintainability, and implementation effort. Then we did a small spike with both approaches. The results showed EF Core was fine for most of the application, but this specific reporting endpoint performed much better with a targeted SQL-based solution.
We decided to keep the main architecture consistent, but allow an exception for that hot path. That gave us the performance we needed without overcomplicating the whole system. The key was making the decision based on evidence, documenting why we chose it, and making sure the team aligned behind it afterward.”
That answer works well because it shows: - collaboration - technical maturity - pragmatism - ability to handle conflict without becoming rigid or personal
Entity Framework, usually EF or EF Core, is Microsoft’s ORM for .NET.
At a high level, it lets you work with your database using C# objects instead of hand-writing SQL for everything.
How it works:
You define entity classes like Customer, Order, or Product. Then you use a DbContext, which acts as the bridge between your app and the database.
With that, you can:
- Query data with LINQ
- Add, update, and remove entities
- Persist changes with SaveChanges()

What happens under the hood:
Example:
- If you write context.Orders.Where(o => o.CustomerId == 1), EF converts that into a SQL SELECT
- If you modify a tracked entity, EF generates an UPDATE statement when you call SaveChanges()

Why people like it:
One important nuance, EF is really productive, but it is not magic.
You still need to understand:
So my short version is, Entity Framework is a tool that lets .NET developers talk to relational databases through C# objects and LINQ, while EF handles the mapping, SQL generation, and change tracking behind the scenes.
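A minimal sketch of those moving parts, an entity, a DbContext, and a LINQ query. This assumes the Microsoft.EntityFrameworkCore.SqlServer package; the entity shape and connection string placeholder are illustrative:

```csharp
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("<connection string>");   // provider choice is an assumption
}

// LINQ is translated to SQL; modified entities are tracked until SaveChanges()
using var db = new ShopContext();
var matches = db.Customers.Where(c => c.Name == "Ada").ToList();  // becomes SELECT ... WHERE
```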
A Web API is basically a backend service that exposes data or business actions over HTTP.
Clients call it using URLs and HTTP verbs like:
- GET to read data
- POST to create something
- PUT or PATCH to update
- DELETE to remove

In .NET, you usually build this with ASP.NET Core Web API. It lets browsers, mobile apps, frontend SPAs, and other services talk to your application in a standard way, usually with JSON.
How I’d explain creating one in .NET:
Start from the ASP.NET Core Web API template in Visual Studio or with the dotnet new webapi command. That gives you the basic setup, routing, config, and often Swagger out of the box.
Define endpoints
Create a controller, like ProductsController. Then add actions mapped to routes and verbs, like GET /api/products or POST /api/products.
Add your business logic
The service layer talks to the database, often through Entity Framework Core or a repository.
Return HTTP responses
Return standard status codes like 200 OK, 201 Created, 400 Bad Request, and 404 Not Found. The API usually sends JSON back to the client.
Test and document it
A simple real-world example:
- GET /api/orders/123 returns order details
- POST /api/orders creates a new order
- DELETE /api/orders/123 removes it
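Those three routes could be sketched with ASP.NET Core minimal APIs; the in-memory dictionary is a stand-in for real storage:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var orders = new Dictionary<int, string> { [123] = "2 widgets" };

// GET /api/orders/123 -> 200 OK with the order, or 404
app.MapGet("/api/orders/{id}", (int id) =>
    orders.TryGetValue(id, out var order) ? Results.Ok(order) : Results.NotFound());

// POST /api/orders -> 201 Created with a Location header
app.MapPost("/api/orders", (string order) =>
{
    var id = orders.Count + 1;
    orders[id] = order;
    return Results.Created($"/api/orders/{id}", order);
});

// DELETE /api/orders/123 -> 204 No Content, or 404
app.MapDelete("/api/orders/{id}", (int id) =>
    orders.Remove(id) ? Results.NoContent() : Results.NotFound());

app.Run();
```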
If I were answering in an interview, I’d keep it practical: “A Web API is an HTTP-based interface that lets other systems or frontends interact with your application. In .NET, I’d usually create one with ASP.NET Core Web API, define controllers or minimal API endpoints, wire in services and data access, and expose REST-style routes that return JSON. Then I’d test it with Swagger or Postman.”
For this kind of question, I’d answer it with a simple structure:
A concrete example:
I worked on a .NET API for an internal order processing platform. It handled product lookup, pricing, and order submission for several downstream systems. Usage grew pretty quickly, and during peak hours we started seeing slow response times, timeouts, and higher CPU on the app servers.
What I identified:
How I approached it:
Changes I made:
- Rewrote queries to project with Select() instead of loading full entity graphs

The impact:
What I liked about that project was that it reinforced a good performance habit, measure first, fix the biggest bottleneck, then re-measure. A lot of the gain did not come from one dramatic change, it came from cleaning up several small inefficiencies across the request path.
I’d answer this in layers: goals, architecture, implementation, operations.
In production, logging and monitoring should help with three things:
So I’d design for:
Safe handling of sensitive data
Logging design
I’d use structured logging, not plain text.
That means every log event is machine-queryable with properties like:
- Timestamp
- Level
- MessageTemplate
- RequestId
- TraceId
- UserId, if safe
- TenantId, if applicable
- Environment
- ServiceName
- Exception
- Business identifiers like OrderId, PaymentId

In ASP.NET Core, I’d usually use:
- ILogger<T> everywhere in the app

Typical sinks:
Central log platform like Elasticsearch/Kibana, Seq, Datadog, Splunk, or Azure Monitor
Log levels strategy
I’d be intentional about log levels:
- Information for normal important app events: app start, request completed, order placed
- Warning for unexpected but recoverable situations: retries, validation anomalies, downstream slowness
- Error for failed operations
- Critical for app-wide failures: startup failure, database unavailable
- Debug and Trace only when needed, usually disabled in production by default

A common mistake is logging too much at Information and creating noise. I prefer fewer, high-value logs.
I’d log key lifecycle and business events:
I would avoid:
Logging sensitive PII unless explicitly required and masked
Correlation and tracing
This is huge in production systems.
Every request should have a correlation ID or trace ID so I can follow it across:
Best modern approach:
- OpenTelemetry, so every log carries TraceId and SpanId

That lets me jump from an alert, to a trace, to the exact logs for that request.
I’d split monitoring into three pillars:
Key application metrics:
Business metrics too, if relevant:
Inventory sync lag
Alerting strategy
I’d avoid alerting directly on raw logs too much. Metrics are better for alerts.
Examples of useful alerts:
I’d make alerts actionable, not noisy.
Each alert should answer:
Where to start investigating
Implementation in ASP.NET Core
At a practical level, I’d do this:
Logging:
- Register Serilog early in Program.cs
- Read config from appsettings.*.json
- Write to console in JSON format
- Enrich logs with machine name, environment, app name, trace IDs
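A common Program.cs shape for that setup. It requires the Serilog.AspNetCore package plus the configuration and enricher packages, and the property values are illustrative:

```csharp
using Serilog;
using Serilog.Formatting.Compact;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseSerilog((context, config) => config
    .ReadFrom.Configuration(context.Configuration)       // levels and sinks from appsettings.*.json
    .Enrich.WithMachineName()                             // Serilog.Enrichers.Environment
    .Enrich.WithProperty("ServiceName", "orders-api")     // illustrative app name
    .WriteTo.Console(new CompactJsonFormatter()));        // JSON to stdout for log shippers

var app = builder.Build();
app.UseSerilogRequestLogging();   // method, path, status code, duration per request
app.Run();
```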
Use ILogger<T> in services/controllers:
- Log at boundaries and important business actions
- Use message templates like logger.LogInformation("Order {OrderId} created for customer {CustomerId}", orderId, customerId);
- Always log exceptions with the exception object, not just the message
Middleware:
- Add request logging middleware
- Capture request method, path, status code, duration
- Optionally exclude noisy endpoints like /health
Global exception handling: - Use exception handling middleware - Return safe error responses to clients - Log the full exception internally with correlation info
Health checks:
- Add ASP.NET Core health checks for:
- App liveness
- DB connectivity
- External dependencies, if appropriate
- Expose /health/live and /health/ready
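The wiring for that could look like the following sketch. AddSqlServer comes from the community AspNetCore.HealthChecks.SqlServer package, and the tag names are a convention, not a framework requirement:

```csharp
var connectionString = builder.Configuration.GetConnectionString("Default");

builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    .AddSqlServer(connectionString, tags: new[] { "ready" });   // DB connectivity probe

// Liveness: is the process up? Readiness: can it actually serve traffic?
app.MapHealthChecks("/health/live",
    new HealthCheckOptions { Predicate = r => r.Tags.Contains("live") });
app.MapHealthChecks("/health/ready",
    new HealthCheckOptions { Predicate = r => r.Tags.Contains("ready") });
```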
OpenTelemetry: - Instrument ASP.NET Core, HttpClient, and database access - Export traces and metrics to something like Jaeger, Grafana, Azure Monitor, Datadog, etc.
A common production setup could be:
Alerts route through PagerDuty, Opsgenie, Teams, or Slack
Operational practices
I’d also mention that design is only half the story. Operations matter.
I’d put in place:
Periodic review of noisy logs and noisy alerts
What I’d say in an interview as a concise answer
I’d design logging and monitoring around observability. In the ASP.NET Core app, I’d use ILogger<T> with structured logging, usually backed by Serilog, and send logs to a centralized platform. I’d enrich every log with correlation data like TraceId, environment, and service name, and I’d be strict about not logging secrets or sensitive payloads.
For monitoring, I’d use OpenTelemetry for metrics and distributed tracing, instrumenting ASP.NET Core, HttpClient, and database calls. I’d track request rate, error rate, latency percentiles, dependency failures, resource usage, and key business metrics. I’d expose health checks for liveness and readiness, build dashboards, and create actionable alerts on symptoms like high error rate, slow p95 latency, failed health checks, and growing queue backlog.
The main principle is, when production breaks, I want to go from alert, to trace, to correlated logs, to root cause fast.
Knowing the questions is just the start. Work with experienced professionals who can help you perfect your answers, improve your presentation, and boost your confidence.
Comprehensive support to help you succeed at every stage of your interview journey
We've already delivered 1-on-1 mentorship to thousands of students, professionals, managers and executives. Even better, they've left an average rating of 4.9 out of 5 for our mentors.
Find .NET Interview Coaches