.NET Interview Questions

Master your next .NET interview with our comprehensive collection of questions and expert-crafted answers. Get prepared with real scenarios that top companies ask.


1. What is CLR (Common Language Runtime) in .NET?

CLR is basically the .NET execution engine.

When you write C# or another .NET language, your code isn't compiled straight to native machine code. It's first compiled into IL, Intermediate Language. Then the CLR steps in and handles running that code.

What the CLR does:

  • Converts IL into machine code using JIT, Just-In-Time compilation
  • Manages memory with garbage collection
  • Handles exceptions
  • Enforces type safety
  • Provides security and runtime checks
  • Manages things like threading and assembly loading

The easy way to think about it:

  • Your code says what to do
  • The CLR figures out how to run it safely and efficiently on the machine

So if someone asks what CLR is in one line, I’d say:

"It’s the runtime in .NET that executes managed code and provides core services like memory management, garbage collection, exception handling, and JIT compilation."

2. What is the role of CTS (Common Type System) in .NET?

A clean way to answer this is:

  1. Start with the one-line purpose.
  2. Mention why it matters across .NET languages.
  3. Give a simple example of interoperability.

Example answer:

CTS, or Common Type System, is basically the rulebook for types in .NET.

It defines:

  • what a type is
  • how types are declared
  • how they behave at runtime
  • how different .NET languages understand the same data

Why that matters:

  • C#, VB.NET, and F# can all compile to the same runtime model
  • a string, int, class, interface, or exception means the same thing across languages
  • code written in one .NET language can be used by another without type confusion

So if I build a class library in C#, a VB.NET app can still consume it because CTS makes sure both sides agree on how types are represented and handled.

In short, CTS gives .NET type safety, consistency, and cross-language interoperability.

3. What is a delegate in .NET and how is it used?

A delegate in .NET is a type that represents references to methods with a specific parameter list and return type. In essence, it acts like a function pointer in C or C++, but is type-safe and secure. You can use delegates to pass methods as arguments to other methods, design callback methods, and define event handlers.

To use a delegate, you first declare it, which specifies the signature of the method it can point to. Then, you create instances of the delegate, assigning them methods that match its signature. You can invoke these methods through the delegate instance, either synchronously or asynchronously. This is especially useful for designing extensible and flexible code, where the method to be invoked can be decided at runtime.
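
As a minimal sketch, the declare/assign/invoke steps above might look like this (the `Notify` delegate and method names are illustrative):

```csharp
using System;

// Declare a delegate type: it can reference any method
// taking a string and returning void
public delegate void Notify(string message);

public static class Program
{
    // Methods matching the delegate's signature
    static void SendEmail(string message) =>
        Console.WriteLine($"Email: {message}");

    static void SendSms(string message) =>
        Console.WriteLine($"SMS: {message}");

    public static void Main()
    {
        // Assign a matching method to a delegate instance
        Notify notify = SendEmail;

        // Delegates are multicast: += chains additional methods
        notify += SendSms;

        // Invoking the delegate calls every method in its invocation list
        notify("Order shipped");
    }
}
```

Because the delegate instance is just a value, it can also be passed into another method as a callback, which is the basis of the event patterns discussed below.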


4. Explain the difference between a delegate and an event.

A delegate in .NET is essentially a type-safe function pointer or a reference to a method. It allows you to encapsulate a method call in a variable, which means you can pass methods around like any other object. Delegates are declared using the delegate keyword and can point to methods that match their signature.

An event, on the other hand, provides a way to define a publisher/subscriber relationship between classes. It uses delegates internally, but it adds a layer of abstraction that enforces encapsulation. Events are typically used to signal that something has happened, and subscribers can register their handlers to be notified when that event occurs. The event keyword in C# makes it so that subscribers can't invoke the event directly; only the class that declares the event can raise it.
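
A small sketch of that publisher/subscriber relationship (the `OrderService` name and payload are illustrative):

```csharp
using System;

// Publisher: only this class can raise the event
public class OrderService
{
    // The event keyword restricts outside code to += and -= only
    public event EventHandler<string>? OrderPlaced;

    public void PlaceOrder(string orderId)
    {
        Console.WriteLine($"Placing order {orderId}");
        // Raise the event; OrderPlaced is null if nobody subscribed
        OrderPlaced?.Invoke(this, orderId);
    }
}

public static class Program
{
    public static void Main()
    {
        var service = new OrderService();

        // Subscriber registers a handler; it cannot call OrderPlaced itself
        service.OrderPlaced += (sender, orderId) =>
            Console.WriteLine($"Notification sent for order {orderId}");

        service.PlaceOrder("A-100");
    }
}
```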

5. What is the purpose of the “using” statement in C#?

The "using" statement in C# is primarily used to ensure that resources are disposed of properly. It's typically used with types that implement the IDisposable interface, like file streams, database connections, or other unmanaged resources. When the "using" block is exited, the Dispose method is automatically called, even if an exception is thrown inside the block. This helps prevent memory and resource leaks. It's essentially syntactic sugar for a try/finally block that handles disposal.
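
Both forms of the statement can be sketched like this (the file name is illustrative):

```csharp
using System;
using System.IO;

public static class Program
{
    public static void Main()
    {
        // Classic using block: Dispose is called when the block exits,
        // even if an exception is thrown inside it
        using (var writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("hello");
        } // writer.Dispose() runs here

        // C# 8+ using declaration: disposed at the end of the enclosing scope
        using var reader = new StreamReader("log.txt");
        Console.WriteLine(reader.ReadToEnd());
    }
}
```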

6. How do you handle exceptions in .NET?

I handle exceptions in .NET with a few simple rules:

  • Catch exceptions where I can actually do something useful
  • Prefer specific exception types over catching Exception everywhere
  • Log enough detail for debugging, but don’t leak internals to users
  • Let exceptions bubble up when the current layer can’t recover
  • Use finally, using, or await using for cleanup

A typical approach looks like this:

  1. Validate early for expected issues
    If something is predictable, like bad input or a missing argument, I use validation instead of relying on exceptions for normal control flow.

  2. Catch at the right layer
    For example:
    • In a repository or API client, I might catch things like SqlException or HttpRequestException
    • In the service layer, I may translate that into a business-friendly exception
    • At the API boundary, I return the right status code and a safe error message

  3. Catch specific exceptions first
    I avoid broad catch (Exception) unless it’s at the top level for global handling, logging, or returning a generic error response.

  4. Preserve context
    If I rethrow, I use throw; instead of throw ex; so I don’t lose the original stack trace.

  5. Centralize cross-cutting handling
    In ASP.NET Core, I usually use middleware or exception filters for consistent logging and responses.

Example:

  • If I’m calling an external API, I’ll catch HttpRequestException
  • Log the failure with correlation details
  • Return a friendly message like, “We couldn’t complete the request right now”
  • If it’s not recoverable in that layer, I let it bubble up to centralized exception handling

I also use custom exceptions sparingly, mainly when they represent meaningful business cases, things like OrderNotFoundException or PaymentFailedException. That makes the code easier to reason about than throwing generic exceptions everywhere.
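
A hedged sketch of the specific-catch, translation, and throw; points above — OrderClient, the URL, and OrderNotFoundException are all illustrative names, not a fixed API:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical business exception for a meaningful failure case
public class OrderNotFoundException : Exception
{
    public OrderNotFoundException(string orderId)
        : base($"Order {orderId} was not found.") { }
}

public static class OrderClient
{
    public static async Task<string> GetOrderAsync(HttpClient http, string orderId)
    {
        try
        {
            // Hypothetical external call
            return await http.GetStringAsync($"https://example.com/orders/{orderId}");
        }
        catch (HttpRequestException ex)
        {
            // Catch the specific type, log it, then translate it
            // into a business-friendly exception for the caller
            Console.WriteLine($"Call failed: {ex.Message}");
            throw new OrderNotFoundException(orderId);
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Unexpected: {ex.Message}");
            throw; // `throw;` preserves the original stack trace, unlike `throw ex;`
        }
    }
}
```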

7. Describe the concept of Garbage Collection in .NET.

Garbage Collection in .NET is a process that automatically handles the allocation and release of memory in your applications. It’s designed to clean up and free memory that is no longer being used, which helps to prevent memory leaks and optimize memory usage.

The garbage collector works by dividing the managed heap into three generations: Gen 0, Gen 1, and Gen 2. It primarily focuses on objects that have a higher likelihood of becoming unreachable quickly, which are allocated in Gen 0. If an object survives the first round of garbage collection, it gets promoted to the next generation, and so on. This generational approach helps improve performance by minimizing the frequency and duration of garbage collection cycles.

You don't need to manually release memory for most objects; the garbage collector takes care of it. However, you can influence garbage collection through practices like implementing the IDisposable interface and explicitly calling the GC.Collect method, though the latter should be used judiciously.

8. What are generics in .NET and why would you use them?

Generics in .NET allow you to define classes, methods, delegates, or interfaces with a placeholder for the type of data they store or use. This means you can create a single class or method that works with different data types while maintaining type safety. For example, instead of creating multiple versions of a list to store integers, strings, or custom objects, you can create a generic list that adapts to whatever type you specify.

Using generics provides several benefits: improved code reuse, better compile-time type checking, and enhanced performance. They allow you to avoid boxing and unboxing for value types and the need to cast to and from object for reference types, which also reduces runtime errors.
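
A short sketch of a generic method and a generic collection (method names are illustrative):

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    // One implementation works for any type T that can compare itself
    public static T Max<T>(T a, T b) where T : IComparable<T> =>
        a.CompareTo(b) >= 0 ? a : b;

    public static void Main()
    {
        Console.WriteLine(Max(3, 7));            // 7 — no boxing for value types
        Console.WriteLine(Max("apple", "pear")); // pear — same method, different type

        // A generic collection: type-safe, no casting to/from object
        var scores = new List<int> { 90, 85 };
        int first = scores[0]; // no cast needed
        Console.WriteLine(first);
    }
}
```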


9. Explain the concept of Dependency Injection in .NET.

Dependency Injection (DI) in .NET is a design pattern used to implement Inversion of Control (IoC) where the control of creating and managing dependencies is transferred from the class itself to an external entity. In practical terms, this means that instead of a class instantiating its dependencies directly, they are provided to the class via constructor parameters, properties, or method parameters. This approach promotes loose coupling and enhances testability and maintainability of code.

For example, consider a class ServiceA that depends on RepositoryA. Instead of creating an instance of RepositoryA inside ServiceA, you pass an instance of RepositoryA through the constructor of ServiceA. In .NET, the built-in DI container allows you to register your services and dependencies in the Startup.cs file (or Program.cs in .NET 6 and later). This way, when ServiceA is requested, the DI container injects the necessary dependencies automatically, making the code cleaner and easier to manage.
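The ServiceA/RepositoryA example above can be sketched with the built-in container (this assumes the Microsoft.Extensions.DependencyInjection package; the interface and method names are illustrative):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IRepositoryA { string GetData(); }

public class RepositoryA : IRepositoryA
{
    public string GetData() => "data";
}

public class ServiceA
{
    private readonly IRepositoryA _repository;

    // The dependency is injected via the constructor, not created inside
    public ServiceA(IRepositoryA repository) => _repository = repository;

    public string Run() => $"ServiceA got: {_repository.GetData()}";
}

public static class Program
{
    public static void Main()
    {
        // Register services with the built-in DI container
        var services = new ServiceCollection();
        services.AddScoped<IRepositoryA, RepositoryA>();
        services.AddScoped<ServiceA>();

        using var provider = services.BuildServiceProvider();

        // The container resolves ServiceA and injects RepositoryA automatically
        var serviceA = provider.GetRequiredService<ServiceA>();
        Console.WriteLine(serviceA.Run());
    }
}
```

In ASP.NET Core apps the same registrations go on `builder.Services` in Program.cs, and the framework builds the provider for you.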

10. What is a NuGet package and how are they used in .NET projects?

A NuGet package is essentially a single ZIP file with a .nupkg extension that contains compiled code (DLLs), related metadata, and other resources like configuration files. It serves as a way to share and reuse code, enabling developers to add third-party libraries or even their own reusable components to their projects seamlessly.

In a .NET project, you manage NuGet packages using the NuGet Package Manager in Visual Studio, the dotnet CLI, or the NuGet CLI. You can search for packages, install them, and manage their dependencies. Once installed, a NuGet package will automatically add the necessary references to your project, making it effortless to incorporate and utilize external libraries.

11. What are extension methods in C#?

Extension methods in C# allow you to add new methods to existing types without modifying their source code. They are static methods but are called as if they were instance methods on the extended type. To create an extension method, you define a static class and then add static methods within it. The first parameter of each method specifies the type it extends, prefixed with the this keyword. They're particularly useful for enhancing classes you don't have direct control over, like .NET built-in types.
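
A minimal sketch of the pattern described above (the method names are illustrative):

```csharp
using System;

// Extension methods must live in a non-nested static class
public static class StringExtensions
{
    // `this string` makes it callable as if it were an instance method
    public static bool IsNullOrBlank(this string? value) =>
        string.IsNullOrWhiteSpace(value);

    public static string Truncate(this string value, int maxLength) =>
        value.Length <= maxLength ? value : value.Substring(0, maxLength);
}

public static class Program
{
    public static void Main()
    {
        // Called like instance methods on a type we don't own
        Console.WriteLine("  ".IsNullOrBlank());      // True
        Console.WriteLine("hello world".Truncate(5)); // hello
    }
}
```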

12. What are different types of authentication in ASP.NET?

In ASP.NET, the main authentication types I’d call out are:

  • Windows Authentication
    Uses the user’s Windows or Active Directory identity.
    Best fit for internal apps on a corporate network, where users are already signed in.

  • Forms Authentication
    Classic web app approach. The user logs in through a custom login page, and the app tracks the authenticated session, usually with a cookie.
    Common in older ASP.NET MVC and Web Forms apps.

  • Cookie Authentication
    Very common in ASP.NET Core. After login, the app issues an auth cookie, and that cookie is sent on later requests.
    Good for server-rendered web apps.

  • Token-based Authentication
    The app issues a token, often a JWT, and the client sends it with each request.
    This is the go-to for APIs, SPAs, and mobile apps because it works well in stateless systems.

  • OAuth
    More about delegated authorization than direct authentication, but people often mention it here.
    It lets users sign in with providers like Google, Microsoft, or Facebook.

  • OpenID Connect
    This is the authentication layer commonly used on top of OAuth 2.0.
    If you want social login or single sign-on, this is usually the more accurate protocol to mention.

  • Identity-based Authentication
    In ASP.NET Core, ASP.NET Core Identity is the membership system that helps manage users, passwords, roles, MFA, lockouts, and login flows.
    It is not a protocol itself, but it is often part of how authentication is implemented.

If I were answering in an interview, I’d also mention this distinction:

  • Authentication answers, "Who are you?"
  • Authorization answers, "What are you allowed to do?"

That usually shows you understand the bigger picture, not just the list.

13. What are partial classes in C#?

Partial classes in C# allow you to split the definition of a class across multiple files. This can be particularly useful in large projects where splitting the class file can improve manageability and readability. When compiled, all the parts are combined into a single class by the compiler. Typically, this feature is used in scenarios involving auto-generated code and manual code, allowing developers to work on one part without interfering with the other.
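
A small sketch — in a real project the two parts would sit in separate files (say, Customer.Generated.cs and Customer.cs), but they compile the same way side by side:

```csharp
using System;

// Part 1: typically the auto-generated half
public partial class Customer
{
    public string Name { get; set; } = "";
}

// Part 2: the hand-written half, in another file in practice
public partial class Customer
{
    public string Greet() => $"Hello, {Name}";
}

public static class Program
{
    public static void Main()
    {
        // The compiler merges both parts into a single Customer class
        var c = new Customer { Name = "Ada" };
        Console.WriteLine(c.Greet()); // Hello, Ada
    }
}
```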

14. How do you perform data validation in an ASP.NET MVC application?

In ASP.NET MVC, I usually think about validation in three layers: model, server, and client.

  1. Model-level validation
    This is the most common starting point. I add Data Annotations on view model properties, things like:
    • Required
    • StringLength
    • Range
    • RegularExpression
    • Compare
    That keeps the rules close to the data and makes them easy to maintain.

  2. Server-side validation
    Server-side validation is the one that really matters, because client-side checks can always be bypassed.
    In the controller, after model binding, I check ModelState.IsValid. If it is not valid, I return the same view and show the validation messages.

  3. Client-side validation
    For a better user experience, I enable unobtrusive client-side validation. That gives users immediate feedback in the browser using jQuery validation, based on the same Data Annotation rules.

For more complex cases, I use:

  • Custom validation attributes, if I need reusable validation logic
  • IValidatableObject, if validation depends on multiple fields
  • Manual ModelState.AddModelError(...), if a rule comes from business logic or a database check

A simple example would be a registration form:

  • Email is marked as required and must match email format
  • Password has minimum length rules
  • ConfirmPassword uses Compare to match the password
  • If the email already exists in the database, I add a ModelState error in the controller or service layer

So overall, I use Data Annotations for standard rules, ModelState on the server for enforcement, and client-side validation for a smoother UX.
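
The registration-form example can be sketched as a view model; the Validator call at the bottom just shows the same Data Annotation rules that MVC checks through ModelState (the type name is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Data Annotations keep the rules next to the data they constrain
public class RegisterViewModel
{
    [Required, EmailAddress]
    public string Email { get; set; } = "";

    [Required, StringLength(100, MinimumLength = 8)]
    public string Password { get; set; } = "";

    [Compare(nameof(Password), ErrorMessage = "Passwords must match.")]
    public string ConfirmPassword { get; set; } = "";
}

public static class Program
{
    public static void Main()
    {
        var model = new RegisterViewModel
        {
            Email = "not-an-email",
            Password = "short",
            ConfirmPassword = "different"
        };

        // Run the attribute rules manually, the way model binding does
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(model, new ValidationContext(model), results, true);

        foreach (var r in results)
            Console.WriteLine(r.ErrorMessage);
    }
}
```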

15. Explain the difference between an IEnumerable and an IQueryable.

IEnumerable is used for querying data from in-memory collections like arrays or lists. It works well for LINQ-to-Objects queries and offers deferred execution, meaning the query is only executed when you iterate over it.

IQueryable, on the other hand, is designed for querying data from out-of-process sources like databases. It enables providers such as LINQ to SQL or Entity Framework, allowing you to execute queries against a remote datastore. Because IQueryable builds an expression tree, it lets the backend provider decide how to translate and execute the query, optimizing performance.
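
A sketch of the difference without a database — AsQueryable stands in for what a provider like EF Core would do with the expression tree:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Program
{
    public static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        // IEnumerable: the filter runs in memory, item by item
        IEnumerable<int> inMemory = numbers.Where(n => n > 2);

        // IQueryable: the filter is captured as an expression tree,
        // which a real provider could translate into SQL
        IQueryable<int> queryable = numbers.AsQueryable().Where(n => n > 2);
        Console.WriteLine(queryable.Expression); // prints the expression tree

        // Both are deferred: nothing runs until you enumerate
        Console.WriteLine(string.Join(",", inMemory)); // 3,4,5
    }
}
```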

16. What is the purpose of the IDisposable interface?

The IDisposable interface is used to release unmanaged resources like file handles, database connections, or memory allocated outside the .NET runtime. Implementing IDisposable ensures that these resources are properly cleaned up, which helps prevent resource leaks and improves application performance. The core method in this interface is Dispose(), which should be called when the object is no longer needed. This way, you can explicitly control the cleanup process, rather than relying on the garbage collector, which might not immediately reclaim unmanaged resources.
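
A minimal sketch of implementing the interface (ResourceHolder is a hypothetical wrapper; a production type holding real unmanaged handles would also consider the full dispose pattern with a finalizer):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public void Use()
    {
        if (_disposed) throw new ObjectDisposedException(nameof(ResourceHolder));
        Console.WriteLine("Using resource");
    }

    public void Dispose()
    {
        if (_disposed) return; // Dispose must be safe to call twice
        // Release the resource here (close handle, connection, etc.)
        Console.WriteLine("Resource released");
        _disposed = true;
    }
}

public static class Program
{
    public static void Main()
    {
        // `using` guarantees Dispose runs, even on exceptions
        using (var holder = new ResourceHolder())
        {
            holder.Use();
        }
    }
}
```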

17. Explain the difference between method overriding and method overloading.

Method overriding in C# occurs when a subclass provides a specific implementation for a method that is already defined in its superclass. The overridden method in the subclass should have the same signature as the method in the parent class, and it uses the 'override' keyword.

Method overloading, on the other hand, happens within the same class and involves creating multiple methods with the same name but different parameters (either in type, number, or both). Overloading increases the readability of the program and allows different versions of a method to be called based on the argument types and numbers.

So, overriding is about redefining a method in a subclass to change or extend its behavior, while overloading is about having multiple methods with the same name but different signatures within the same class.
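
Both ideas side by side in one sketch (the Shape/Circle names are illustrative):

```csharp
using System;

public class Shape
{
    // virtual allows subclasses to override this method
    public virtual string Describe() => "A shape";

    // Overloading: same name, different parameter lists, same class
    public double Area(double side) => side * side;
    public double Area(double width, double height) => width * height;
}

public class Circle : Shape
{
    // Overriding: same signature, new behavior in the subclass
    public override string Describe() => "A circle";
}

public static class Program
{
    public static void Main()
    {
        Shape shape = new Circle();
        Console.WriteLine(shape.Describe()); // A circle — resolved at runtime
        Console.WriteLine(shape.Area(2));    // 4 — one-argument overload
        Console.WriteLine(shape.Area(2, 3)); // 6 — two-argument overload
    }
}
```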

18. How does middleware work in ASP.NET Core?

Middleware in ASP.NET Core is part of the request pipeline and is used to handle requests and responses. Essentially, each piece of middleware sits in the pipeline to process incoming requests and can forward these requests to the next middleware in the sequence or terminate the request right there. This allows for a highly flexible and modular approach to request processing.

When a request hits the server, it goes through a series of middleware components, each potentially modifying the request or performing specific tasks like authentication, logging, or data compression. Middleware components are added to the pipeline in the Configure method of the Startup class, or directly in Program.cs with the minimal hosting model in .NET 6 and later, using methods like app.Use. The order in which middleware is added is crucial because it dictates the sequence of operations executed for each request.
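
A minimal Program.cs sketch, assuming the ASP.NET Core Web SDK (the logging middleware here is illustrative):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Each middleware can act before and after the rest of the pipeline
app.Use(async (context, next) =>
{
    Console.WriteLine($"Request: {context.Request.Path}");
    await next();                 // pass control down the pipeline
    Console.WriteLine($"Response: {context.Response.StatusCode}");
});

// Endpoint at the end of the pipeline
app.MapGet("/", () => "Hello");

app.Run();
```

Ordering matters: if the logging middleware were registered after the endpoint, it would never see the request.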

19. Explain the concept of task parallel library (TPL) in .NET.

TPL, or the Task Parallel Library, is the .NET framework for doing concurrent and parallel work without managing threads directly.

At a high level, it gives you a better abstraction than raw Thread or manual ThreadPool usage.

What it does well:

  • Represents work as Task or Task<T>
  • Schedules that work efficiently on the thread pool
  • Makes it easier to run things in parallel
  • Supports continuations, cancellation, and exception handling
  • Works nicely with async and await

The key idea is this:

  • You focus on the work that needs to happen
  • TPL handles a lot of the low-level scheduling and coordination

For example:

  • Task.Run(...) is a simple way to queue CPU-bound work
  • Task.WhenAll(...) lets you wait for multiple tasks together
  • Parallel.For and Parallel.ForEach help with data parallelism when the same operation needs to run across many items

Why it’s useful:

  • Less manual thread management
  • Better scalability
  • Cleaner, more maintainable code
  • Easier error propagation compared to raw threads

One important distinction:

  • Use TPL for parallel and task-based work
  • Use async and await for non-blocking asynchronous flows, often I/O-bound operations
  • In modern .NET, they often work together

A simple way to explain it in an interview is:

“TPL is .NET’s task-based concurrency model. It lets you run and coordinate work using Task objects instead of managing threads yourself. It’s useful for parallel execution, background processing, continuations, cancellation, and exception handling, and it’s a big improvement over working directly with Thread or ThreadPool.”
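
The Task.Run, Task.WhenAll, and Parallel.For points above can be sketched together:

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // Queue CPU-bound work on the thread pool
        Task<int> square = Task.Run(() => 21 * 2);

        Task<int> delayed = Task.Run(async () =>
        {
            await Task.Delay(50); // simulate I/O latency
            return 100;
        });

        // Wait for multiple tasks together
        int[] results = await Task.WhenAll(square, delayed);
        Console.WriteLine(results[0] + results[1]); // 142

        // Data parallelism: the same operation across many items
        Parallel.For(0, 3, i => Console.WriteLine($"worker {i}"));
    }
}
```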

20. Explain the concept of threading in .NET.

Threading in .NET is basically about letting your app do more than one thing at a time without everything blocking on a single path of execution.

A simple way to think about it:

  • A Thread is an actual execution path
  • The thread pool is a shared pool of worker threads managed by .NET
  • A Task is a higher-level way to represent work that may run on a thread pool thread
  • async/await is the preferred way to handle non-blocking I/O, like database calls, HTTP requests, or file access

The important distinction is this:

  • CPU-bound work, like heavy calculations, may use multiple threads to speed things up
  • I/O-bound work, like waiting on an API or database, usually does not need a dedicated thread while it waits

In practice, most modern .NET code uses:

  • Task
  • async/await
  • TPL features like Parallel.ForEach for parallel CPU work when it actually makes sense

Manual Thread creation still exists, but it is less common because it is lower level and harder to manage.

The main benefits of threading are:

  • Better responsiveness, especially in UI apps
  • Better throughput in server apps
  • Better use of system resources for parallel work

The main risks are:

  • Race conditions, when multiple threads touch shared data at the same time
  • Deadlocks, when threads wait on each other forever
  • Contention and performance overhead from too much locking or too many threads

So if I were explaining my approach in a .NET app, I would say:

  • Use async/await for I/O-bound operations
  • Use Task and parallel processing carefully for CPU-bound work
  • Avoid manual thread management unless there is a specific reason
  • Be careful with shared state, synchronization, and cancellation

That shows you understand both the concept and how it is used in real production code.

21. How do you handle deadlocks in a multi-threaded environment?

Handling deadlocks in a multi-threaded environment starts with prevention through careful design. One effective method is to acquire locks in a consistent order across all threads, which helps avoid circular wait conditions. Also, introducing a timeout mechanism while acquiring locks can help detect potential deadlocks. If a thread can't obtain a lock within the specified timeout, it can roll back its operations and retry, thus breaking the deadlock.
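
The lock-ordering and timeout ideas can be sketched like this (the lock and method names are illustrative):

```csharp
using System;
using System.Threading;

public static class Program
{
    private static readonly object LockA = new();
    private static readonly object LockB = new();

    // Both methods acquire the locks in the same order (A, then B),
    // so no circular wait can occur between them
    static void Transfer()
    {
        lock (LockA)
            lock (LockB)
                Console.WriteLine("transfer done");
    }

    static void Audit()
    {
        lock (LockA) // same order — never B then A
            lock (LockB)
                Console.WriteLine("audit done");
    }

    public static void Main()
    {
        var t1 = new Thread(Transfer);
        var t2 = new Thread(Audit);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Timeout-based acquisition as a fallback detection mechanism
        if (Monitor.TryEnter(LockA, TimeSpan.FromMilliseconds(200)))
        {
            try { Console.WriteLine("got lock"); }
            finally { Monitor.Exit(LockA); }
        }
        else
        {
            Console.WriteLine("possible deadlock, backing off");
        }
    }
}
```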

Another strategy is to minimize the scope and duration of locks, reducing the chances of contention. Additionally, using higher-level concurrency constructs, like the Task Parallel Library in .NET, can help manage complex synchronization scenarios more safely than manual threading and locking.

Finally, proper monitoring and logging are crucial. Logging can help detect deadlocks early in the development cycle by capturing details about threads and locks. Tools like Visual Studio's concurrency profiler can also be used to visualize and diagnose deadlocks in running applications.

22. What is Razor syntax and how is it used in ASP.NET?

Razor is the view syntax ASP.NET uses to mix C# with HTML.

The main idea is simple:

  • You write normal HTML
  • When you need server-side values or logic, you use @
  • ASP.NET processes that Razor code on the server, then sends plain HTML to the browser

You’ll mostly see it in:

  • ASP.NET MVC views
  • ASP.NET Core MVC views
  • Razor Pages
  • Files like .cshtml

A few common uses:

  • Output a value, like @Model.Name
  • Run simple conditions, like showing a message only if a user is logged in
  • Loop through data and render lists or tables
  • Reuse layout files, partial views, and sections

Why it’s useful:

  • Keeps views clean and readable
  • Makes HTML and C# work naturally together
  • Helps separate UI rendering from business logic
  • Reduces a lot of noisy syntax compared to older ASP.NET view engines

Example, if you have a list of products, Razor lets you loop through Model.Products in the view and generate an <li> for each one. So instead of building HTML in code, you keep the markup in the view where it belongs, and just drop in the bits of C# needed to render dynamic data.
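
That product-list example might look like this in a .cshtml view (ProductListViewModel and its members are assumed names, not a fixed API):

```cshtml
@* A hypothetical Products.cshtml view *@
@model ProductListViewModel

<h2>@Model.Title</h2>

@if (Model.Products.Any())
{
    <ul>
        @foreach (var product in Model.Products)
        {
            <li>@product.Name - @product.Price.ToString("C")</li>
        }
    </ul>
}
else
{
    <p>No products found.</p>
}
```

Razor evaluates the `@` expressions on the server and sends only the resulting HTML to the browser.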

23. How do you secure an ASP.NET application?

I think about ASP.NET security in layers. Not just login, but the full path from the browser to the database to deployment.

A solid way to answer this is:

  1. Start with authentication and authorization
  2. Cover common web attack protections
  3. Mention secure configuration and secrets
  4. Add monitoring, patching, and operational practices

Then give a practical example of how you apply it.

For me, the main areas are:

  • Use HTTPS everywhere
    • Enforce TLS
    • Redirect HTTP to HTTPS
    • Turn on HSTS in production

  • Strong authentication and authorization
    • Use ASP.NET Identity, OpenID Connect, or Azure AD depending on the app
    • Require MFA for sensitive systems
    • Use role-based or policy-based authorization
    • Follow least privilege, users and services only get what they need

  • Protect against common attacks
    • Validate all input
    • Use model validation on incoming requests
    • Avoid building raw SQL, use EF Core or parameterized queries
    • Encode output to reduce XSS risk
    • Use anti-forgery protection for form posts to prevent CSRF

  • Secure cookies and sessions
    • Mark cookies as HttpOnly and Secure
    • Set appropriate SameSite policies
    • Keep session lifetime reasonable
    • Don’t store sensitive data in the client

  • Secrets and configuration
    • Never hardcode connection strings, API keys, or passwords
    • Use environment variables, user secrets for local dev, and something like Azure Key Vault in production
    • Separate config by environment

  • Error handling and logging
    • Don’t expose stack traces or internal details to users
    • Return safe error messages
    • Log security events like failed logins, access denials, and unexpected exceptions
    • Monitor logs and alerts

  • Keep dependencies and platform updated
    • Patch .NET, NuGet packages, OS, and containers regularly
    • Run dependency and vulnerability scans in CI/CD

  • Add security headers where appropriate
    • Content-Security-Policy
    • X-Content-Type-Options
    • X-Frame-Options or frame-ancestors
    • Referrer-Policy

  • Secure APIs too
    • Validate JWTs correctly
    • Check issuer, audience, expiry, and signing keys
    • Rate limit public endpoints
    • Lock down CORS to only trusted origins

A concrete example:

On a recent internal app, I secured it by:

  • Integrating with Azure AD for SSO and MFA
  • Using policy-based authorization for admin-only actions
  • Enforcing HTTPS and secure cookies
  • Storing secrets in Key Vault
  • Using EF Core instead of raw SQL
  • Adding anti-forgery validation on MVC form actions
  • Hiding detailed exceptions outside development
  • Sending auth failures and suspicious activity into centralized logging

That gave us a good baseline, and then we reviewed it during testing with automated scans and a quick manual security checklist before release.

24. What is the difference between an abstract class and an interface in C#?

I’d explain it like this:

An interface is a contract.

  • It says, "any class that implements me must provide these members"
  • It’s best for defining capabilities, like ILogger, IDisposable, or IEnumerable
  • A class can implement multiple interfaces

An abstract class is a base type with shared behavior.

  • It lets you define common logic once and reuse it
  • It can have abstract members that derived classes must implement
  • It can also have concrete methods, fields, properties, and constructors
  • A class can inherit from only one class, abstract or not

The practical difference is usually this:

  • Use an interface when you want to describe what something can do
  • Use an abstract class when you want to share code and state across related types

Quick example:

  • interface IVehicle might require Start() and Stop()
  • abstract class Vehicle might store shared data like Speed and provide common logic for acceleration

One extra note, in modern C#, interfaces can also have default implementations. Even with that, I still think of interfaces as contracts first, and abstract classes as a tool for inheritance and shared behavior.
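
The IVehicle/Vehicle example from above, sketched out:

```csharp
using System;

// Contract: anything implementing IVehicle must provide these members
public interface IVehicle
{
    void Start();
    void Stop();
}

// Shared state and behavior live in the abstract base class
public abstract class Vehicle : IVehicle
{
    public int Speed { get; protected set; }

    // Common logic, defined once and reused by all derived types
    public void Accelerate(int amount) => Speed += amount;

    public abstract void Start();          // each vehicle starts differently
    public virtual void Stop() => Speed = 0;
}

public class Car : Vehicle
{
    public override void Start() => Console.WriteLine("Engine on");
}

public static class Program
{
    public static void Main()
    {
        var car = new Car();
        car.Start();                  // Engine on
        car.Accelerate(30);           // shared logic from the base class
        Console.WriteLine(car.Speed); // 30
    }
}
```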

25. What is a singleton pattern and how do you implement it in .NET?

A singleton is a pattern where a class can only have one instance, and the whole app uses that same instance.

You usually use it for things like:

  • configuration access
  • caching
  • logging
  • shared infrastructure services

In .NET, the basic implementation is:

  • make the class sealed, so it cannot be inherited
  • make the constructor private, so nobody can create it directly
  • expose a single static instance through a property or field

A clean version looks like this in practice:

  • private static readonly Singleton _instance = new Singleton();
  • private Singleton() { }
  • public static Singleton Instance => _instance;

Why this works:

  • static readonly creates one instance for the lifetime of the app domain
  • the private constructor blocks new Singleton() from outside the class
  • the static property gives controlled global access

One important point in .NET:

  • this eager initialization approach is thread-safe by default
  • if you want lazy creation, use Lazy<T> instead of writing your own locking unless you really need to

Example idea:

  • private static readonly Lazy<MyService> _instance = new(() => new MyService());
  • public static MyService Instance => _instance.Value;

One practical note, in modern .NET apps, especially with ASP.NET Core, I would usually prefer dependency injection over a classic singleton pattern. If I need one shared instance, I register it as a singleton in the DI container. That gives me the same lifecycle benefit, but with better testability and cleaner design.
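
Both variants described above in one sketch (AppConfig and MetricsService are illustrative names):

```csharp
using System;

// Eagerly initialized singleton: thread-safe via static initialization
public sealed class AppConfig
{
    private static readonly AppConfig _instance = new AppConfig();
    public static AppConfig Instance => _instance;

    private AppConfig() { } // blocks `new AppConfig()` from outside
}

// Lazy variant: created on first access, also thread-safe
public sealed class MetricsService
{
    private static readonly Lazy<MetricsService> _instance =
        new(() => new MetricsService());
    public static MetricsService Instance => _instance.Value;

    private MetricsService() { }
}

public static class Program
{
    public static void Main()
    {
        // Every access returns the same object
        Console.WriteLine(ReferenceEquals(AppConfig.Instance, AppConfig.Instance));         // True
        Console.WriteLine(ReferenceEquals(MetricsService.Instance, MetricsService.Instance)); // True
    }
}
```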

26. What is the difference between synchronous and asynchronous programming in .NET?

The easiest way to explain it is this:

  • Synchronous code waits
  • Asynchronous code does not block while waiting

In synchronous programming, work happens step by step.

  • Line 2 waits for line 1 to finish
  • If one operation takes 5 seconds, everything behind it waits too
  • Simple to follow, but it can slow down apps and block threads

Example:

  • Read a file
  • Wait until it finishes
  • Then call the database
  • Wait again
  • Then return the response

Asynchronous programming is different. It lets your app start an operation, like a database call or HTTP request, and use that time more efficiently instead of just sitting there waiting.

In .NET, this usually means using async and await.

  • async marks a method that contains asynchronous work
  • await pauses that method until the operation completes
  • The key point is, it does not block the thread while waiting

That matters most for I/O-bound work, like:

  • API calls
  • Database queries
  • File reads and writes
  • Network operations

Why it matters:

  • Better responsiveness in UI apps
  • Better scalability in web apps
  • More efficient thread usage

One important point, async does not mean faster CPU work by default.

  • If the job is CPU-heavy, async alone will not make it faster
  • It mainly helps when your app is waiting on external resources

So in an interview, I would frame it like this:

  • Synchronous = do one thing at a time, and wait before moving on
  • Asynchronous = start work, free up the thread while waiting, then continue when the result is ready

In .NET, asynchronous programming is especially useful in ASP.NET Core, where freeing up threads during I/O helps the app handle more requests under load.
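
A sketch of the difference — Task.Delay stands in for a real HTTP or database wait, and FetchAsync is an illustrative name:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class Program
{
    // Simulated I/O-bound call; the thread is free during the await
    public static async Task<string> FetchAsync(string name, int ms)
    {
        await Task.Delay(ms);
        return name;
    }

    public static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Start both operations, then await them together
        Task<string> users = FetchAsync("users", 200);
        Task<string> orders = FetchAsync("orders", 200);
        string[] results = await Task.WhenAll(users, orders);

        // Elapsed time is roughly 200 ms, not 400, because the waits overlap;
        // a synchronous version would pay for each wait in sequence
        Console.WriteLine($"{string.Join(",", results)} in ~{sw.ElapsedMilliseconds} ms");
    }
}
```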

27. Explain the concept of Managed Code.

Managed code is code that runs under the .NET runtime, specifically the CLR.

In simple terms:

  • You write code in C#, VB.NET, or F#
  • It gets compiled into IL, intermediate language
  • At runtime, the CLR turns that into native machine code
  • While it runs, the CLR manages a lot of the heavy lifting

What the CLR handles for you:

  • Memory management and garbage collection
  • Type safety
  • Exception handling
  • Security checks
  • Thread management
  • Runtime diagnostics

Why it matters:

  • Less manual memory cleanup
  • Fewer low-level bugs
  • Safer, more reliable applications
  • Easier development compared to unmanaged code

A simple way to explain it in an interview is:

  • Managed code is "runtime-supervised" code
  • Unmanaged code is code that runs more directly on the OS, where you handle more things yourself

Example:

  • A typical C# application is managed code
  • A traditional C or C++ native application is usually unmanaged code

So if I had to say it naturally in an interview:

"Managed code is code that executes under the control of the CLR in .NET. Instead of compiling straight to native machine code, it first compiles to IL, and then the runtime converts it when the app runs. The big advantage is that the CLR provides services like garbage collection, type checking, and exception handling, which makes the application safer and easier to maintain."

28. What are assemblies in .NET, and what types do they come in?

I’d answer this by defining what an assembly is first, then listing the common types and why they matter.

An assembly in .NET is the compiled unit that the CLR loads and runs.

It usually contains:

  • IL code, the compiled intermediate code
  • Metadata, like type information and references
  • A manifest, which includes version, culture, and assembly identity
  • Optional resources, like images or localized strings

So in practice, an assembly is both:

  • a deployment unit
  • a versioning unit

Common assembly types:

  • Private assembly
    • Used by a single application
    • Usually lives in that app’s folder
    • Best when the dependency is only meant for that one app

  • Shared assembly
    • Used by multiple applications
    • Historically stored in the GAC
    • Needs a strong name so it can be uniquely identified

You can also mention the physical forms:

  • DLL, a class library or reusable component
  • EXE, an executable assembly

If I wanted to keep it interview-friendly, I’d say:

“An assembly is the basic compiled unit in .NET that the runtime loads. It contains the code plus metadata and manifest information for versioning and identity. The main types are private assemblies, which are used by one app, and shared assemblies, which can be used across multiple apps, traditionally through the GAC.”
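To make the manifest-identity point concrete, here is a minimal sketch that reads an assembly's identity via reflection:

```csharp
using System;
using System.Reflection;

// Every loaded assembly exposes its manifest identity: name, version, culture.
Assembly asm = typeof(string).Assembly;   // the assembly containing System.String
AssemblyName id = asm.GetName();

Console.WriteLine($"Name:    {id.Name}");
Console.WriteLine($"Version: {id.Version}");
```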

29. What is LINQ and why is it useful?

LINQ is basically a way to query and shape data directly in C#.

Instead of writing a bunch of nested loops, temp variables, and if statements, you can do things like:

  • filter data with Where
  • sort it with OrderBy
  • pick specific fields with Select
  • group data with GroupBy
  • check conditions with Any, All, FirstOrDefault

Why it’s useful:

  • It makes code cleaner and easier to read.
  • It’s strongly typed, so you get compile-time checks and IntelliSense.
  • It works across different data sources, like in-memory collections, databases via Entity Framework, and XML.
  • It reduces boilerplate code, which usually means fewer bugs.
  • It encourages a more declarative style: you say what you want, not every step to get there.

A simple example is filtering a list of customers to find active ones, then sorting by name. With LINQ, that becomes a couple of readable method calls instead of a manual loop.
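That example might look like this (a minimal sketch; the tuple data is made up for illustration):

```csharp
using System;
using System.Linq;

// Sample data standing in for a real customer source
var customers = new[]
{
    (Name: "Zoe", IsActive: true),
    (Name: "Adam", IsActive: true),
    (Name: "Mia", IsActive: false),
};

// Declarative pipeline: filter, sort, project
var activeNames = customers
    .Where(c => c.IsActive)   // keep active customers
    .OrderBy(c => c.Name)     // sort by name
    .Select(c => c.Name)      // take only the field we need
    .ToList();

Console.WriteLine(string.Join(", ", activeNames)); // Adam, Zoe
```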

In interviews, I’d describe it as, “LINQ is C#’s built-in querying feature for collections and other data sources. It’s useful because it makes data operations more readable, concise, and type-safe.”

30. Explain the differences between .NET Core, .NET Framework, and .NET 5+.

I’d explain it by comparing them across three things: platform support, current usage, and where Microsoft is investing.

  • .NET Framework
    • This is the original .NET stack.
    • It’s Windows-only.
    • Best known for older ASP.NET, WinForms, WPF, and enterprise apps that have been around for years.
    • It’s mature and stable, but basically in maintenance mode now. Microsoft still supports it, but new innovation is happening elsewhere.

  • .NET Core
    • This was Microsoft’s rebuild of .NET for modern development.
    • It’s cross-platform, so it runs on Windows, Linux, and macOS.
    • It was designed to be faster, lighter, and better for cloud apps, APIs, containers, and microservices.
    • It introduced the modern direction of .NET, but the name stopped after .NET Core 3.1.

  • .NET 5+
    • This is the next step after .NET Core.
    • Starting with .NET 5, Microsoft dropped the word “Core” and moved to one unified platform: .NET 5, .NET 6, .NET 7, .NET 8, and so on.
    • It keeps the cross-platform and performance benefits of .NET Core, and it’s now the main platform for new development.
    • If I’m starting a new project today, this is the default choice.

The simple way to remember it:

  1. .NET Framework = legacy, Windows-only
  2. .NET Core = modern, cross-platform foundation
  3. .NET 5+ = current unified .NET, and the future going forward

In practice, if a company has an older internal app, it might still be on .NET Framework. If they’re building new APIs, cloud services, or modern apps, they’re usually on .NET 6+ or newer.

31. How does ASP.NET Core improve upon previous versions?

I’d answer this by grouping the improvements into a few buckets:

  1. Platform and hosting
  2. Performance and architecture
  3. Developer experience
  4. Modern app support

Then I’d give a few concrete examples instead of listing every feature.

A solid answer would be:

ASP.NET Core is a big step forward from classic ASP.NET because it was built for modern application development.

Some of the biggest improvements are:

  • Cross-platform support
    It runs on Windows, Linux, and macOS, which gives teams much more flexibility in development and deployment.

  • Much better performance
    ASP.NET Core is lighter, faster, and designed for high-throughput web apps and APIs. Kestrel also made hosting more efficient.

  • Modular architecture
    Instead of bringing in a huge framework by default, you only add the packages and middleware you need. That keeps apps leaner and easier to maintain.

  • Built-in dependency injection
    In older ASP.NET, DI often required third-party tools and extra setup. In ASP.NET Core, it’s part of the framework, which makes application design cleaner and more testable.

  • Unified framework
    MVC and Web API were brought into a more consistent model, so building web apps and REST APIs feels much more streamlined.

  • Better cloud and container readiness
    ASP.NET Core was designed with cloud deployment in mind, so it works well with Docker, Kubernetes, environment-based config, and scalable hosting.

  • Cleaner configuration and middleware pipeline
    Things like routing, authentication, logging, and error handling are more explicit and easier to control through the request pipeline.

  • Open source and faster evolution
    Because it’s open source and maintained publicly, improvements happen faster and the ecosystem is more transparent.

If I wanted to keep it very concise in an interview, I’d say:

“ASP.NET Core improved on older ASP.NET by being cross-platform, faster, more modular, and more cloud-friendly. It also introduced built-in dependency injection, a cleaner middleware pipeline, and a more unified way to build MVC apps and APIs.”
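Several of those points show up directly in a minimal Program.cs (a sketch; `IEmailSender` and `SmtpEmailSender` are hypothetical names used only to show the registration shape):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Built-in DI: register only what you need, no third-party container required
builder.Services.AddControllers();
builder.Services.AddScoped<IEmailSender, SmtpEmailSender>(); // hypothetical service

var app = builder.Build();

// Explicit middleware pipeline: the request flows through these in order
app.UseExceptionHandler("/error");
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();
```

Because each middleware is added explicitly, the request pipeline is easy to read and reorder, which is part of the "cleaner configuration" point above.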

32. Can you explain the difference between value types and reference types in C#, and how that impacts memory allocation and performance?

In C#, the core difference is this:

  • Value types store the actual data.
  • Reference types store a reference, or pointer-like handle, to the data.

That affects copying, memory allocation, nullability, and performance.

  1. Value types

Examples: int, double, bool, char, struct, enum, DateTime, Guid

How they behave:

  • The variable contains the value itself.
  • Assigning one value-type variable to another copies the data.
  • Each variable gets its own independent copy.

Example:

  • int a = 5;
  • int b = a;
  • Changing b does not affect a.

Memory:

  • Often allocated inline, meaning directly inside the containing object or stack frame.
  • People say "value types live on the stack", but that is not always true.
  • If a value type is a field in a class, it lives inside that object on the heap.
  • If it is boxed, it gets wrapped in a heap object.

  2. Reference types

Examples: class, string, array, delegate, object

How they behave:

  • The variable holds a reference to an object.
  • Assigning one reference variable to another copies the reference, not the object.
  • Two variables can point to the same object.

Example:

  • var p1 = new Person();
  • var p2 = p1;
  • Modifying p2.Name also affects p1.Name, because both references point to the same object.

Memory:

  • The actual object is typically allocated on the heap.
  • The reference itself is just a small value stored wherever the variable lives: stack, field, etc.
  • Heap objects are managed by the garbage collector.

  3. Copying behavior

This is the interview-friendly distinction:

  • Value types: copy the value
  • Reference types: copy the reference

That means:

  • Value type copies are isolated
  • Reference type copies share the same underlying object
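The two copy behaviors are easy to demonstrate (a minimal sketch using an array as the reference type so no extra class is needed):

```csharp
using System;

// Value type: assignment copies the data itself
int a = 5;
int b = a;      // b gets its own copy
b = 10;
Console.WriteLine(a); // 5 — a is unaffected

// Reference type: assignment copies the reference, not the object
int[] first = { 1, 2, 3 };
int[] second = first;   // both variables now refer to the same array
second[0] = 99;
Console.WriteLine(first[0]); // 99 — the change is visible through both
```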

  4. Performance impact

Value types can be faster when:

  • They are small
  • They are short-lived
  • You want to avoid heap allocation
  • You want better cache locality

Why:

  • No separate object allocation
  • Less garbage collection pressure
  • Data can be packed more tightly

But value types can be worse when:

  • They are large structs
  • They get copied a lot
  • They are boxed frequently

A large struct passed around by value can be expensive because every assignment or method call may copy all its fields.

Reference types can be better when:

  • The object is large
  • You want to share one instance
  • You need polymorphism
  • You want to avoid repeatedly copying large data

But they come with:

  • Heap allocation cost
  • Garbage collection overhead
  • Potentially worse memory locality

  5. Boxing and unboxing

Important interview topic.

Boxing:

  • Converts a value type to object or an interface it implements
  • Creates a heap allocation

Unboxing:

  • Extracts the value type back out
  • Requires a cast

Why it matters:

  • Boxing adds allocation and GC pressure
  • It can hurt performance in tight loops

Example cases:

  • Putting int into a non-generic collection like ArrayList
  • Passing a struct as object

Generic collections like List<int> avoid boxing.
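Boxing is easy to show in code (a minimal sketch):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

int n = 7;

// Boxing: the int is wrapped in a new heap object
object boxed = n;

// Unboxing: an explicit cast copies the value back out
int unboxed = (int)boxed;
Console.WriteLine(unboxed); // 7

// Non-generic ArrayList stores object, so every Add boxes the int
var oldStyle = new ArrayList();
oldStyle.Add(n);        // hidden boxing allocation

// Generic List<int> stores the values directly — no boxing
var modern = new List<int> { n };
Console.WriteLine(modern[0]); // 7, no heap wrapper involved
```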

  6. Nullability

  • Value types are non-null by default, like int and bool
  • They can be made nullable with ?, like int?
  • Reference types can be null, though nullable reference types help express intent in modern C#

  7. Strings are a special case

string is a reference type, but it behaves value-like in some ways because:

  • It is immutable
  • Reassigning a string variable does not modify the original string object

Still, technically, it is a reference type allocated on the heap.

  8. Practical rule of thumb

Use a struct when:

  • The type is small
  • It represents a single value
  • It is immutable or mostly immutable
  • It does not need inheritance

Use a class when:

  • The type is larger or more complex
  • It needs identity
  • It needs inheritance or shared mutable state

  9. Good interview answer in one line

If you want a crisp answer:

  • Value types hold their data directly and are copied by value.
  • Reference types hold references to heap objects and are copied by reference.
  • This affects allocation, GC pressure, copying cost, and overall performance.

If they push deeper, mention:

  • Value types are not always on the stack
  • Reference variables are not the object itself
  • Boxing is a common performance trap
  • Large structs can be slower than classes due to copy cost

A strong practical example is comparing Point as a small struct versus Person as a class:

  • Point is tiny and value-like, so a struct makes sense
  • Person has identity and shared state, so a class makes sense

33. Describe a time when you had to debug a difficult issue in a .NET application. How did you approach the problem, and what was the outcome?

A strong way to answer this is with a simple STAR structure:

  • Situation, what was happening?
  • Task, what were you responsible for?
  • Action, how did you investigate and fix it?
  • Result, what changed afterward?

For debugging questions, interviewers usually want to hear that you were methodical, data-driven, and calm under pressure. So I’d focus on:

  • How you narrowed the scope
  • What tools or logs you used
  • How you validated the root cause
  • What you changed to prevent it from happening again

Here’s how I’d answer it:

In one project, we had a .NET API that would intermittently slow down and sometimes time out during peak business hours. The tricky part was that it never happened in lower environments, and there were no obvious exceptions in the application logs.

I was responsible for figuring out whether the issue was in our code, the database, or infrastructure. My first step was to avoid guessing and gather better data. I added more structured logging around the slow endpoint, including correlation IDs, execution timing for key steps, and the specific database calls being made. I also used Application Insights to trace requests end-to-end and compare successful versus failing requests.

Once I had that visibility, I noticed the slowdown was always tied to one EF Core query. On the surface, the query looked fine, but under production-sized data it was generating inefficient SQL and pulling back far more data than expected because of an Include chain and a filter being applied too late.

To confirm it, I reproduced the issue against a copy of production-like data, captured the generated SQL, and reviewed the execution plan with our DBA. That showed a table scan on a high-traffic table and a big spike in memory usage.

The fix was a combination of changes:

  • I rewrote the query to project only the fields we actually needed
  • I removed unnecessary Includes
  • I moved filtering earlier in the query
  • We added an index to support the access pattern

After that, the endpoint response time dropped from several seconds to a few hundred milliseconds, and the timeouts stopped. As a follow-up, I added query performance logging and a load test around that workflow so we could catch similar regressions before release.

What I like about that example is it shows a structured debugging approach, not just a lucky fix.

34. If a deployed .NET API starts returning intermittent 500 errors under heavy load, how would you investigate and resolve the issue?

I’d handle this in two parts: how to answer it in an interview, and what I’d actually do.

How to structure the answer:

  1. Stabilize first, reduce user impact.
  2. Gather evidence, don’t guess.
  3. Isolate the bottleneck: app, DB, external dependency, infrastructure.
  4. Fix the immediate issue.
  5. Add protections so it does not happen again.

A strong answer sounds calm and methodical, especially under pressure.

What I’d do:

  1. Triage and contain

  • Check current impact: error rate, affected endpoints, start time, traffic spike, deployment correlation.
  • If needed, mitigate fast:
    • Roll back the latest release if errors started after deployment.
    • Scale out instances if it looks like resource saturation.
    • Enable rate limiting or temporarily reduce noncritical traffic.
    • Fail over or degrade gracefully for expensive features.

  2. Look at telemetry first

I’d go straight to observability tools: App Insights, Datadog, Grafana, ELK, whatever the team uses.

I’d check:

  • Exception details, stack traces, inner exceptions.
  • Request rate, response times, failure percentage.
  • CPU, memory, thread pool usage, GC activity.
  • DB connection pool usage, query duration, deadlocks, timeouts.
  • Outbound HTTP dependency failures, latency, socket exhaustion signs.
  • Pod or VM restarts, container OOM kills, ingress or load balancer errors.

The key is to correlate the 500s with a resource or dependency signal.

  3. Reproduce or narrow it down

  • Try to identify whether it happens only under load, only on certain endpoints, or only on certain nodes.
  • Compare healthy vs unhealthy instances.
  • Run a load test in staging or a safe environment using realistic traffic.
  • If I can, capture dumps, traces, or profiling data during the issue window.

At this stage I’m trying to answer: is it code, config, infrastructure, or a downstream dependency?

  4. Common .NET-specific causes I’d investigate

Under heavy load, the usual suspects are:

  • Thread pool starvation
    • Symptoms: rising latency, requests queueing, CPU not necessarily maxed.
    • Causes: blocking calls like .Result, .Wait(), sync I/O in the request path.
    • Fix: make hot paths fully async, remove blocking, review middleware and filters.

  • DB connection pool exhaustion
    • Symptoms: intermittent timeouts, more failures as concurrency rises.
    • Causes: too many concurrent queries, leaked connections, slow queries.
    • Fix: ensure connections are disposed properly, optimize queries and indexes, reduce chatty data access, tune pool settings carefully.

  • HttpClient misuse, socket exhaustion
    • Symptoms: outbound calls fail intermittently under load.
    • Cause: creating new HttpClient() per request.
    • Fix: use IHttpClientFactory, set sensible timeouts, retries, circuit breakers.

  • Memory pressure or GC pauses
    • Symptoms: high memory, Gen 2 collections, OOM, restarts.
    • Causes: large object allocations, caching too much, serialization overhead.
    • Fix: reduce allocations, stream large payloads, cap cache size, review object lifetimes.

  • Lock contention or shared state issues
    • Symptoms: requests slow or fail only at concurrency.
    • Causes: static mutable state, locks around hot paths.
    • Fix: remove shared mutable state, use concurrent collections or redesign contention points.

  • Unhandled exceptions from edge cases
    • Symptoms: only certain payloads or timing conditions fail.
    • Fix: inspect logs by endpoint, payload shape, user path; add validation and better exception handling.

  5. Check recent changes

I’d always ask:

  • What changed recently: code, infrastructure, secrets, autoscaling rules, DB config?
  • Did the traffic shape change: batch jobs, new clients, larger payloads?

A lot of production incidents come from a small change that only breaks at scale.

  6. Implement the fix

Depending on the root cause, examples:

  • Replace sync blocking calls with async all the way down.
  • Add IHttpClientFactory and Polly policies for retries, timeout, circuit breaker.
  • Optimize a slow SQL query, add an index, or reduce N+1 calls.
  • Increase instance count or DB tier as a short-term mitigation.
  • Tune Kestrel, thread pool, or connection pool only after confirming the bottleneck.
  • Add backpressure, queueing, or rate limiting for bursty workloads.

  7. Verify in production

After the change:

  • Watch the error rate drop.
  • Confirm latency, throughput, and resource metrics normalize.
  • Make sure the fix works under repeated load testing, not just at normal traffic.

  8. Prevent recurrence

I’d add:

  • Better structured logging with correlation IDs.
  • Dashboards and alerts on 500 rate, dependency latency, saturation metrics.
  • Load and soak tests in CI/CD for critical endpoints.
  • Health checks and synthetic monitoring.
  • Resilience patterns: retries with jitter, circuit breakers, bulkheads, caching where appropriate.
  • A blameless postmortem with action items.

If I wanted to make this sound strong in an interview, I’d say something like:

“My first step is to reduce customer impact, rollback, scale out, or rate limit if needed. Then I’d use telemetry to correlate the 500s with app exceptions, resource saturation, or dependency failures. In a .NET API under heavy load, I’d specifically check for thread pool starvation, DB connection pool exhaustion, HttpClient misuse, memory pressure, and slow downstream calls. Once I isolate the bottleneck, I’d implement the fix, validate it with load testing, and add monitoring and resilience so it does not happen again.”
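For the HttpClient misuse point, the factory pattern looks roughly like this (a sketch; the client name, base address, and retry values are illustrative, and the retry policy assumes the Microsoft.Extensions.Http.Polly package):

```csharp
// Registration: a named client instead of new HttpClient() per request
builder.Services.AddHttpClient("inventory", client =>
{
    client.BaseAddress = new Uri("https://inventory.example.com/"); // hypothetical
    client.Timeout = TimeSpan.FromSeconds(5);
})
// Retry transient failures (5xx, 408, network errors) with simple backoff
.AddTransientHttpErrorPolicy(policy =>
    policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(200 * attempt)));

// Consumption: ask the factory for a client so handlers and sockets are pooled
public class InventoryClient
{
    private readonly IHttpClientFactory _factory;
    public InventoryClient(IHttpClientFactory factory) => _factory = factory;

    public Task<string> GetStockAsync(string sku)
        => _factory.CreateClient("inventory").GetStringAsync($"stock/{sku}");
}
```

The factory pools message handlers, which avoids the socket exhaustion that per-request `new HttpClient()` causes under load.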


35. What is the difference between an interface-based service registration with transient, scoped, and singleton lifetimes in ASP.NET Core, and when would you use each?

In ASP.NET Core DI, interface-based registration means you register a contract to an implementation, like IEmailSender -> SmtpEmailSender.

Example:

  • services.AddTransient<IEmailSender, SmtpEmailSender>()
  • services.AddScoped<IEmailSender, SmtpEmailSender>()
  • services.AddSingleton<IEmailSender, SmtpEmailSender>()

The difference is the lifetime of the created object.

  1. Transient

  • A new instance is created every time it is requested.
  • If two classes ask for IEmailSender, they each get separate instances.
  • Even within the same HTTP request, multiple resolutions mean multiple objects.

Use it for:

  • Lightweight, stateless services
  • Services with no shared state
  • Pure business logic helpers

Examples:

  • Formatters
  • Mappers
  • Calculation services
  • Small domain services

Watch out for:

  • Too many object creations if the service is expensive
  • Injecting a transient into a singleton can be okay, but understand that it gets created when the singleton is built if resolved there

  2. Scoped

  • One instance per scope
  • In web apps, that usually means one instance per HTTP request
  • Everything within the same request gets the same instance
  • A new request gets a new instance

Use it for:

  • Services that should share request-specific state
  • Services that work with DbContext
  • Unit of work style services

Examples:

  • Repository or application services that use EF Core
  • Current-user context services
  • Request-level caching services

Most common real-world choice:

  • Scoped is often the default for services that access the database

Watch out for:

  • Never inject scoped services directly into singletons
  • That causes a lifetime mismatch and can lead to runtime errors or incorrect behavior

  3. Singleton

  • One instance for the whole application lifetime
  • Created once, reused everywhere
  • Same instance across all requests and users

Use it for:

  • Stateless services that are expensive to create
  • Shared infrastructure
  • App-wide caches or configuration-like services

Examples:

  • In-memory cache wrappers
  • Precomputed lookup services
  • Services holding reusable expensive resources
  • Custom configuration providers

Watch out for:

  • Must be thread-safe
  • Should not depend on scoped services
  • If it stores mutable state, that state is shared across all users and requests

How I usually choose:

  • Transient for simple stateless logic
  • Scoped for request-based work, especially anything touching EF Core
  • Singleton for shared, thread-safe, app-wide services

A practical mental model:

  • Transient = new every time
  • Scoped = once per request
  • Singleton = once per app
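The three registrations side by side (a sketch; the interface and class names are hypothetical, used only to show the shape):

```csharp
// Transient: a new instance per resolution — lightweight, stateless helpers
builder.Services.AddTransient<IPriceCalculator, PriceCalculator>();

// Scoped: one instance per HTTP request — the usual choice for EF Core work
builder.Services.AddScoped<IOrderRepository, OrderRepository>();

// Singleton: one instance for the whole app — must be thread-safe
builder.Services.AddSingleton<IRateCache, InMemoryRateCache>();

// Note: AddDbContext registers the DbContext as scoped by default,
// which is why services that depend on it are usually scoped too.
```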

Example interview answer: “I register services by interface so the implementation is decoupled from consumers and easy to swap or test. Then I choose the lifetime based on how long the instance should live. Transient creates a new instance every resolution, so I use it for lightweight stateless services. Scoped creates one instance per request, so it’s ideal for database-related services and request-specific state. Singleton creates one instance for the whole app lifetime, so I use it for shared, thread-safe services like caches or expensive reusable components. The main thing I watch for is lifetime mismatches, especially not injecting scoped services into singletons.”

36. How do you handle disagreements within a development team when deciding between different implementation approaches in a .NET solution?

I’d handle that by balancing engineering judgment with team alignment.

A good way to answer this in an interview is:

  1. Start with your principle, disagreement is healthy if it stays focused on the problem.
  2. Explain your process, clarify goals, compare tradeoffs, involve the right people, decide, then commit.
  3. Give a real example where there were competing approaches and you helped the team move forward.

My approach:

  • First, I try to remove ego from the conversation.
    • I bring it back to: what are we optimizing for?
    • For example, maintainability, performance, delivery speed, cost, team familiarity, or scalability.

  • Then I make the options explicit. In a .NET solution, that might be:
    • layered monolith vs microservices
    • IQueryable exposure vs strict repository boundaries
    • EF Core convenience vs hand-tuned SQL for a hot path
    • MediatR/CQRS pattern vs simpler service-based design

  • I like to compare options against a few agreed criteria:
    • business impact
    • complexity
    • long-term maintenance
    • testability
    • performance
    • security and operational risk
    • how reversible the decision is

  • If the team is stuck in opinions, I push for evidence.
    • Build a small spike
    • Measure performance
    • Review production constraints
    • Look at support and debugging implications
    • Check how well each option fits the existing architecture

  • If there is still no consensus, I use a decision owner model.
    • Usually the tech lead or architect makes the call after hearing input.
    • Once a decision is made, I commit fully, even if my preferred option was not chosen.

  • After implementation, I like to revisit the decision.
    • If the chosen approach caused pain, we capture that and adjust next time.
    • That keeps disagreement productive instead of political.

Example answer:

“In development teams, I try to treat disagreement as a good sign, because it usually means people care about quality. My first step is to align on what matters most for that decision. In a .NET project, that could be delivery speed, clean architecture, performance, or ease of testing.

On one project, we had a disagreement about whether a new reporting feature should be built using our standard EF Core service pattern or a separate optimized query approach with stored procedures. One group wanted consistency with the rest of the codebase, the other was worried about query performance because the dataset was large.

I helped structure the discussion around tradeoffs instead of preferences. We agreed on criteria like response time, maintainability, and implementation effort. Then we did a small spike with both approaches. The results showed EF Core was fine for most of the application, but this specific reporting endpoint performed much better with a targeted SQL-based solution.

We decided to keep the main architecture consistent, but allow an exception for that hot path. That gave us the performance we needed without overcomplicating the whole system. The key was making the decision based on evidence, documenting why we chose it, and making sure the team aligned behind it afterward.”

That answer works well because it shows:

  • collaboration
  • technical maturity
  • pragmatism
  • ability to handle conflict without becoming rigid or personal

37. What is Entity Framework and how does it work?

Entity Framework, usually EF or EF Core, is Microsoft’s ORM for .NET.

At a high level, it lets you work with your database using C# objects instead of hand-writing SQL for everything.

How it works:

  • Your C# classes represent your data, like Customer, Order, or Product
  • EF maps those classes to database tables
  • Class properties map to columns
  • Relationships like one-to-many or many-to-many map to foreign keys

Then you use a DbContext, which acts like the bridge between your app and the database.

With that, you can:

  • Query data with LINQ
  • Insert and update objects
  • Delete records
  • Track changes automatically
  • Save everything with SaveChanges()

What happens under the hood:

  1. You write a LINQ query in C#
  2. EF translates that into SQL
  3. The database executes it
  4. EF turns the results back into .NET objects

Example:

  • If you write something like context.Orders.Where(o => o.CustomerId == 1), EF converts that into a SQL SELECT
  • If you change a property on an object EF is tracking, it figures out the right UPDATE statement when you call SaveChanges()
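A minimal sketch of that translation (the `Order` entity, `ShopContext`, and connection string are hypothetical; assumes the EF Core SQL Server provider package):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// The LINQ query below...
using var db = new ShopContext();
var orders = db.Orders
    .Where(o => o.CustomerId == 1)
    .ToList();
// ...is translated by EF Core into SQL along the lines of:
// SELECT [o].[Id], [o].[CustomerId], [o].[Total]
// FROM [Orders] AS [o]
// WHERE [o].[CustomerId] = 1

// Hypothetical entity and context for illustration
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("Server=.;Database=Shop;Trusted_Connection=True;"); // hypothetical
}
```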

Why people like it:

  • Less boilerplate data access code
  • Strongly typed queries
  • Easier to maintain than lots of raw SQL
  • Works well with domain models and business logic
  • Supports migrations, so schema changes can be versioned in code

One important nuance: EF is really productive, but it is not magic.

You still need to understand:

  • How queries are translated
  • When data is loaded, eager, lazy, explicit
  • Performance issues like N+1 queries
  • When raw SQL or stored procedures make more sense

So my short version is, Entity Framework is a tool that lets .NET developers talk to relational databases through C# objects and LINQ, while EF handles the mapping, SQL generation, and change tracking behind the scenes.

38. What is a Web API and how do you create one in .NET?

A Web API is basically a backend service that exposes data or business actions over HTTP.

Clients call it using URLs and HTTP verbs like:

  • GET to read data
  • POST to create something
  • PUT or PATCH to update
  • DELETE to remove

In .NET, you usually build this with ASP.NET Core Web API. It lets browsers, mobile apps, frontend SPAs, and other services talk to your application in a standard way, usually with JSON.

How I’d explain creating one in .NET:

  1. Create the project

  • Start with the ASP.NET Core Web API template in Visual Studio or with the dotnet new webapi command.
  • That gives you the basic setup, routing, config, and often Swagger out of the box.

  2. Define endpoints

  • Add a controller, or use minimal APIs.
  • In a controller-based API, you create a class like ProductsController.
  • Then add actions mapped to routes and verbs, like GET /api/products or POST /api/products.

  3. Add your business logic

  • Usually the controller calls a service layer.
  • The service layer talks to the database, often through Entity Framework Core or a repository.

  4. Return HTTP responses

  • You return things like 200 OK, 201 Created, 400 Bad Request, 404 Not Found.
  • The API usually sends JSON back to the client.

  5. Test and document it

  • Swagger is commonly included, so you can run the app and try endpoints in the browser.
  • You can also test with Postman or curl.

A simple real-world example:

  • GET /api/orders/123 returns order details
  • POST /api/orders creates a new order
  • DELETE /api/orders/123 removes it
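Those routes map directly onto a minimal API (a sketch; the in-memory dictionary stands in for a real data store):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// In-memory store standing in for a database
var orders = new Dictionary<int, string> { [123] = "2x Widget" };

// GET /api/orders/123 -> 200 with the order, or 404
app.MapGet("/api/orders/{id:int}", (int id) =>
    orders.TryGetValue(id, out var order) ? Results.Ok(order) : Results.NotFound());

// POST /api/orders -> 201 Created with a Location header
app.MapPost("/api/orders", (string order) =>
{
    int id = orders.Count + 1000;
    orders[id] = order;
    return Results.Created($"/api/orders/{id}", order);
});

// DELETE /api/orders/123 -> 204 No Content, or 404
app.MapDelete("/api/orders/{id:int}", (int id) =>
    orders.Remove(id) ? Results.NoContent() : Results.NotFound());

app.Run();
```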

If I were answering in an interview, I’d keep it practical: “A Web API is an HTTP-based interface that lets other systems or frontends interact with your application. In .NET, I’d usually create one with ASP.NET Core Web API, define controllers or minimal API endpoints, wire in services and data access, and expose REST-style routes that return JSON. Then I’d test it with Swagger or Postman.”

39. Tell me about a .NET project you worked on where you had to improve application performance or scalability. What did you identify, and what changes did you make?

For this kind of question, I’d answer it with a simple structure:

  1. Context, what the app did and why performance mattered.
  2. Problem, what signals showed it was slow or not scaling.
  3. Actions, what I measured, identified, and changed.
  4. Results, what improved in numbers.
  5. Reflection, what I learned or would do next.

A concrete example:

I worked on a .NET API for an internal order processing platform. It handled product lookup, pricing, and order submission for several downstream systems. Usage grew pretty quickly, and during peak hours we started seeing slow response times, timeouts, and higher CPU on the app servers.

What I identified:

  • Average API latency was creeping up, especially on a few read-heavy endpoints.
  • SQL Server was under pressure, lots of repeated queries and some expensive joins.
  • App servers were making too many synchronous calls to downstream services.
  • We also had a classic N+1 query issue in a couple of EF Core data access paths.
  • Thread pool starvation showed up occasionally because some older code paths were still blocking on async work.

How I approached it:

  • I started with measurement first, not guessing.
  • Used Application Insights and logging to trace slow endpoints.
  • Profiled SQL queries and checked execution plans.
  • Added timing around service calls and repository methods.
  • Looked at percentiles, not just average response time, because p95 was much worse than the mean.

Changes I made:

  • Reworked a few EF Core queries to avoid N+1 problems, using proper projection with Select() instead of loading full entity graphs.
  • Added database indexes for the most common filter and join columns.
  • Introduced caching for relatively static reference data, first with in-memory caching, then for shared scenarios with Redis.
  • Converted blocking I/O code to true async end-to-end, especially around HTTP and database access.
  • Batched some downstream service calls instead of making many small sequential requests.
  • Tightened payloads returned by the API so we weren’t serializing large objects the client didn’t need.
  • For one high-volume workflow, I moved non-critical post-processing into a background queue so the API could return faster.
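The N+1 fix mentioned above boils down to projecting to exactly what the endpoint returns in one set-based query. Here is a LINQ sketch with hypothetical Order and Customer entities; against EF Core, the same Join/Select shape is translated into a single SQL query instead of one query per order.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entities standing in for EF Core models.
class Customer { public int Id; public string Name = ""; }
class Order { public int Id; public int CustomerId; public decimal Total; }

// DTO shaped for the API response, so full entity graphs are never serialized.
class OrderSummary { public int OrderId; public string CustomerName = ""; public decimal Total; }

static class OrderQueries
{
    // One set-based projection instead of one customer lookup per order
    // (the N+1 trap). With EF Core this shape runs as a single SQL query.
    public static List<OrderSummary> Summaries(
        IEnumerable<Order> orders, IEnumerable<Customer> customers) =>
        orders.Join(customers,
                    o => o.CustomerId,
                    c => c.Id,
                    (o, c) => new OrderSummary
                    {
                        OrderId = o.Id,
                        CustomerName = c.Name,
                        Total = o.Total
                    })
              .ToList();
}
```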

The impact:

  • One of the worst endpoints dropped from around 2.5–3 seconds to under 500 ms in normal conditions.
  • Overall database load dropped noticeably because of fewer repeated reads and better query plans.
  • Throughput improved enough that we could handle peak traffic with fewer scale-out events.
  • Timeout-related incidents during peak periods dropped a lot.

What I liked about that project was that it reinforced a good performance habit: measure first, fix the biggest bottleneck, then re-measure. A lot of the gain did not come from one dramatic change; it came from cleaning up several small inefficiencies across the request path.

40. How would you design and implement logging and monitoring for a production ASP.NET Core application?

I’d answer this in layers: goals, architecture, implementation, operations.

  1. Start with the goal

In production, logging and monitoring should help with three things:

  • Detect issues fast
  • Diagnose root cause quickly
  • Measure system health and business impact

So I’d design for:

  • Structured logs
  • Centralized collection
  • Correlation across requests and services
  • Metrics and alerting
  • Distributed tracing
  • Safe handling of sensitive data

  2. Logging design

I’d use structured logging, not plain text.

That means every log event is machine-queryable with properties like:

  • Timestamp
  • Level
  • MessageTemplate
  • RequestId
  • TraceId
  • UserId, if safe
  • TenantId, if applicable
  • Environment
  • ServiceName
  • Exception
  • Domain-specific fields like OrderId, PaymentId

In ASP.NET Core, I’d usually use:

  • Built-in ILogger<T> everywhere in the app
  • Serilog as the logging provider for structured output and flexible sinks

Typical sinks:

  • Console, for containerized environments
  • Central log platform like Elasticsearch/Kibana, Seq, Datadog, Splunk, or Azure Monitor
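Wiring that up might look like the following Program.cs sketch, assuming the Serilog.AspNetCore package (plus Serilog.Settings.Configuration for ReadFrom.Configuration). The service name is made up for illustration.

```csharp
// Program.cs sketch, assuming the Serilog.AspNetCore NuGet package.
using Serilog;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseSerilog((context, config) => config
    .ReadFrom.Configuration(context.Configuration) // levels and sinks from appsettings.*.json
    .Enrich.WithProperty("ServiceName", "orders-api") // hypothetical service name
    .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName)
    .WriteTo.Console()); // console output for containerized environments

var app = builder.Build();
app.UseSerilogRequestLogging(); // one structured event per HTTP request
app.Run();
```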

  3. Log levels strategy

I’d be intentional about log levels:

  • Information for normal important app events, app start, request completed, order placed
  • Warning for unexpected but recoverable situations, retries, validation anomalies, downstream slowness
  • Error for failed operations
  • Critical for app-wide failures, startup failure, database unavailable
  • Debug and Trace only when needed, usually disabled in production by default

A common mistake is logging too much at Information and creating noise. I prefer fewer, high-value logs.

  4. What I would log

I’d log key lifecycle and business events:

  • Application startup and shutdown
  • Incoming HTTP requests and response status codes
  • Unhandled exceptions
  • Calls to external dependencies, DB, APIs, message brokers
  • Authentication and authorization failures
  • Background job execution
  • Important business events, checkout completed, invoice generated

I would avoid:

  • Logging passwords, tokens, secrets
  • Logging full request bodies by default
  • Logging sensitive PII unless explicitly required and masked
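As a simple guard for that last point, obvious secrets can be masked before a message reaches any sink. The LogRedactor helper below is hypothetical and only catches one pattern; real redaction policies are usually broader.

```csharp
using System.Text.RegularExpressions;

// Hypothetical helper: masks bearer tokens before a message reaches a log sink.
// Illustrative only; a real policy would cover more patterns (API keys, PII).
static class LogRedactor
{
    private static readonly Regex TokenPattern =
        new(@"(Bearer\s+)\S+", RegexOptions.IgnoreCase);

    // "Authorization: Bearer eyJ..." becomes "Authorization: Bearer ***"
    public static string Redact(string message) =>
        TokenPattern.Replace(message, "$1***");
}
```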

  5. Correlation and tracing

This is huge in production systems.

Every request should have a correlation ID or trace ID so I can follow it across:

  • API gateway
  • ASP.NET Core app
  • downstream APIs
  • queues
  • background workers
  • database calls

Best modern approach:

  • Use OpenTelemetry for tracing and metrics
  • Propagate W3C trace context headers
  • Enrich logs with TraceId and SpanId

That lets me jump from an alert, to a trace, to the exact logs for that request.
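In code, that setup might look like this sketch, assuming the OpenTelemetry.Extensions.Hosting package plus the ASP.NET Core, HttpClient, and runtime instrumentation packages; the service name is illustrative.

```csharp
// OpenTelemetry wiring sketch for Program.cs, assuming the
// OpenTelemetry.Extensions.Hosting and instrumentation NuGet packages.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(r => r.AddService("orders-api")) // hypothetical service name
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()   // incoming HTTP requests
        .AddHttpClientInstrumentation()   // outgoing HTTP calls
        .AddOtlpExporter())               // to Jaeger, Tempo, etc. via OTLP
    .WithMetrics(m => m
        .AddAspNetCoreInstrumentation()
        .AddRuntimeInstrumentation()      // GC, thread pool counters
        .AddOtlpExporter());
```

With W3C trace context propagation on by default, the TraceId emitted here is the same one to enrich logs with, which is what makes the alert-to-trace-to-logs jump possible.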

  6. Monitoring design

I’d split monitoring into three pillars:

  • Logs, for event detail
  • Metrics, for trends and alerting
  • Traces, for request flow and latency breakdown

Key application metrics:

  • Request rate
  • Error rate
  • Latency, p50, p95, p99
  • CPU and memory
  • GC activity
  • Thread pool saturation
  • DB query duration and failure rate
  • External API latency and failure rate
  • Queue length, if async
  • Background job success/failure counts

Business metrics too, if relevant:

  • Orders per minute
  • Payment failures
  • Signup conversion
  • Inventory sync lag

  7. Alerting strategy

I’d avoid alerting directly on raw logs too much. Metrics are better for alerts.

Examples of useful alerts:

  • Error rate above threshold for 5 minutes
  • p95 latency above SLA
  • Health check failing
  • Dependency timeout spike
  • Pod/container restart loop
  • Queue backlog growing
  • No traffic when there should be traffic

I’d make alerts actionable, not noisy.

Each alert should answer:

  • What is broken
  • How severe it is
  • Which service/environment is affected
  • Where to start investigating

  8. Implementation in ASP.NET Core

At a practical level, I’d do this:

Logging:

  • Register Serilog early in Program.cs
  • Read config from appsettings.*.json
  • Write to console in JSON format
  • Enrich logs with machine name, environment, app name, trace IDs

Use ILogger<T> in services/controllers:

  • Log at boundaries and important business actions
  • Use message templates like logger.LogInformation("Order {OrderId} created for customer {CustomerId}", orderId, customerId);
  • Always log exceptions with the exception object, not just the message

Middleware:

  • Add request logging middleware
  • Capture request method, path, status code, duration
  • Optionally exclude noisy endpoints like /health

Global exception handling:

  • Use exception handling middleware
  • Return safe error responses to clients
  • Log the full exception internally with correlation info
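A minimal inline version of that middleware could look like this sketch; the shape of the safe error payload is an assumption.

```csharp
// Global exception handler sketch for Program.cs.
// using Microsoft.AspNetCore.Diagnostics;  // for IExceptionHandlerFeature
app.UseExceptionHandler(errorApp =>
{
    errorApp.Run(async context =>
    {
        var feature = context.Features.Get<IExceptionHandlerFeature>();
        var logger = context.RequestServices.GetRequiredService<ILogger<Program>>();

        // Full detail stays internal; TraceIdentifier ties it to request logs.
        logger.LogError(feature?.Error,
            "Unhandled exception for request {TraceId}", context.TraceIdentifier);

        // Only a safe, generic payload goes back to the client.
        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await context.Response.WriteAsJsonAsync(
            new { error = "An unexpected error occurred", traceId = context.TraceIdentifier });
    });
});
```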

Health checks:

  • Add ASP.NET Core health checks for:
    • App liveness
    • DB connectivity
    • External dependencies, if appropriate
  • Expose /health/live and /health/ready
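Registration and endpoints for that could be sketched as follows; the lambda checks are trivial stand-ins for real probes.

```csharp
// Health check sketch for Program.cs.
// using Microsoft.AspNetCore.Diagnostics.HealthChecks;   // HealthCheckOptions
// using Microsoft.Extensions.Diagnostics.HealthChecks;   // HealthCheckResult
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    .AddCheck("database",
              () => HealthCheckResult.Healthy("stand-in for a real DB ping"),
              tags: new[] { "ready" });

var app = builder.Build();

// Liveness: is the process up. Readiness: can it serve real traffic.
app.MapHealthChecks("/health/live",
    new HealthCheckOptions { Predicate = r => r.Tags.Contains("live") });
app.MapHealthChecks("/health/ready",
    new HealthCheckOptions { Predicate = r => r.Tags.Contains("ready") });
```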

OpenTelemetry:

  • Instrument ASP.NET Core, HttpClient, and database access
  • Export traces and metrics to something like Jaeger, Grafana, Azure Monitor, Datadog, etc.

  9. Example architecture

A common production setup could be:

  • ASP.NET Core app emits:
    • Structured logs to stdout
    • Metrics and traces via OpenTelemetry
  • Kubernetes or App Service collects stdout
  • Logs go to ELK, Seq, or Azure Monitor
  • Metrics go to Prometheus/Grafana or cloud monitoring
  • Traces go to Jaeger, Tempo, Application Insights, or Datadog
  • Alerts route through PagerDuty, Opsgenie, Teams, or Slack

  10. Operational practices

I’d also mention that design is only half the story. Operations matter.

I’d put in place:

  • Log retention policies
  • Sampling for high-volume traces/logs
  • Dashboards per service
  • Runbooks linked from alerts
  • Environment tagging, dev, test, prod
  • Version tagging so we can correlate issues to deployments
  • Redaction/masking policy for sensitive data
  • Periodic review of noisy logs and noisy alerts

  11. What I’d say in an interview as a concise answer

I’d design logging and monitoring around observability. In the ASP.NET Core app, I’d use ILogger<T> with structured logging, usually backed by Serilog, and send logs to a centralized platform. I’d enrich every log with correlation data like TraceId, environment, and service name, and I’d be strict about not logging secrets or sensitive payloads.

For monitoring, I’d use OpenTelemetry for metrics and distributed tracing, instrumenting ASP.NET Core, HttpClient, and database calls. I’d track request rate, error rate, latency percentiles, dependency failures, resource usage, and key business metrics. I’d expose health checks for liveness and readiness, build dashboards, and create actionable alerts on symptoms like high error rate, slow p95 latency, failed health checks, and growing queue backlog.

The main principle is, when production breaks, I want to go from alert, to trace, to correlated logs, to root cause fast.

Get Interview Coaching from .NET Experts

Knowing the questions is just the start. Work with experienced professionals who can help you perfect your answers, improve your presentation, and boost your confidence.

Complete your .NET interview preparation

Comprehensive support to help you succeed at every stage of your interview journey

Still not convinced? Don't just take our word for it

We've already delivered 1-on-1 mentorship to thousands of students, professionals, managers and executives. Even better, they've left an average rating of 4.9 out of 5 for our mentors.

Find .NET Interview Coaches