C# Interview Questions

Master your next C# interview with our comprehensive collection of questions and expert-crafted answers. Prepare with real scenarios from interviews at top companies.




1. How do async and await work under the hood, and what mistakes have you seen developers make when using them?

async and await are compiler features, not magic. The compiler rewrites an async method into a state machine. When it hits await, it checks whether the awaited Task is done. If not, it returns control to the caller, stores the method state, and schedules the continuation to resume later. By default, it also captures the current SynchronizationContext or TaskScheduler, which is why UI apps resume on the UI thread.

Common mistakes I see:

  • Blocking on async with .Result or .Wait(), which can deadlock and kill scalability.
  • Using async void outside event handlers, because exceptions are hard to observe.
  • Forgetting to await a task, creating fire-and-forget bugs.
  • Assuming async makes code faster, when it mostly improves responsiveness and throughput.
  • Not using ConfigureAwait(false) in library code when context capture is unnecessary.
  • Starting too many tasks at once and overwhelming I/O, DB, or thread pool resources.
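A minimal sketch of the awaited-versus-forgotten difference, using a hypothetical LoadAsync helper that simulates async I/O:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical helper simulating async I/O.
async Task<int> LoadAsync()
{
    await Task.Delay(10); // control returns to the caller here until the delay completes
    return 42;
}

// Correct: await resumes the compiler-generated state machine and
// flows the result (and any exception) back to this point.
int value = await LoadAsync();
Console.WriteLine(value); // 42

// Mistake: a forgotten await becomes fire-and-forget; any exception
// the task throws may never be observed.
_ = LoadAsync();

// Mistake (left commented out): blocking can deadlock in apps with a
// SynchronizationContext, such as classic UI or old ASP.NET:
// int blocked = LoadAsync().Result;
```

The commented-out `.Result` line is the deadlock trap: the blocked thread is often the very thread the continuation was scheduled to resume on.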

2. How do you design a C# service or class to be testable?

I design for testability by reducing hidden dependencies and making behavior easy to isolate.

  • Depend on abstractions, inject collaborators through constructors, like IClock, IRepository, ILogger.
  • Keep classes focused on one responsibility, smaller classes are easier to unit test.
  • Separate pure business logic from I/O, database, file system, HTTP, and time.
  • Avoid static state, singletons, and new inside core logic, they make mocking and control harder.
  • Return clear outputs and use deterministic inputs, so tests are stable and readable.
  • Put side effects at the edges, orchestration in services, rules in domain classes.
  • Design async properly with Task and cancellation tokens, so tests can verify behavior cleanly.

In practice, if a pricing service needs current time and customer data, I inject both instead of calling DateTime.Now or hitting EF directly. That lets me test pricing rules with simple fakes.
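To make that pricing example concrete, here is a minimal sketch. A Func&lt;DateTime&gt; stands in for an injected IClock abstraction, and the weekend discount is a made-up rule purely for illustration:

```csharp
using System;

// The clock is injected as a dependency instead of calling DateTime.Now directly.
// (Func<DateTime> is a lightweight stand-in for an IClock interface.)
decimal Price(decimal basePrice, Func<DateTime> utcNow) =>
    utcNow().DayOfWeek is DayOfWeek.Saturday or DayOfWeek.Sunday
        ? basePrice * 0.9m   // hypothetical weekend discount rule
        : basePrice;

// Deterministic tests become trivial: 2024-01-06 was a Saturday.
Console.WriteLine(Price(100m, () => new DateTime(2024, 1, 6))); // 90.0
Console.WriteLine(Price(100m, () => new DateTime(2024, 1, 3))); // 100
```

Swapping the lambda for a fixed date is exactly the "simple fake" the answer describes, with no mocking framework needed.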

3. What are the main differences between value types and reference types in C#, and how has that affected your design decisions?

In C#, value types hold the actual data, usually live on the stack or inline, and assignment copies the value. Reference types hold a reference to an object on the heap, and assignment copies the reference, not the object. Value types are great for small, immutable data like DateTime, decimal, or custom structs. Reference types make more sense when you need shared state, inheritance, or larger, more complex objects.

In design, that changes how I model data. I use struct only when the type is small, logically a single value, and should not be null by default. Otherwise I prefer classes, because copying large structs can hurt performance and mutable structs are error-prone. I also think about equality, boxing, and nullability early, because those affect correctness and API usability.
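The copy-on-assignment difference can be shown with built-in types alone, using a value tuple (a value type) against an array (a reference type):

```csharp
using System;

// Value semantics: assignment copies the data.
var a = (X: 1, Y: 2);
var b = a;        // b is an independent copy
b.X = 99;
Console.WriteLine(a.X); // 1 — the original is untouched

// Reference semantics: assignment copies the reference, not the object.
int[] first = { 1, 2 };
int[] second = first;   // both variables point at the same array
second[0] = 99;
Console.WriteLine(first[0]); // 99 — shared state is visible through both
```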


4. How does garbage collection work in .NET, and what are some common situations where memory issues can still occur in C# applications?

.NET uses a generational garbage collector. New objects go into Gen 0, survivors move to Gen 1 and then Gen 2, and large objects (roughly 85 KB and up) are allocated on the Large Object Heap. The GC tracks reachable objects from roots like stack variables, statics, and CPU registers. When memory pressure rises, it pauses managed threads, marks live objects, reclaims unreachable ones, and may compact memory to reduce fragmentation. This is automatic, but it is not instant and it only manages managed memory.

Memory issues still happen in common cases:

  • Event subscriptions keep objects alive if you forget to unsubscribe.
  • Static fields, caches, and singletons can grow forever.
  • IDisposable objects like streams or database connections leak unmanaged resources if not disposed.
  • Large allocations, especially strings and byte arrays, can fragment the LOH and spike memory.
  • Closures, timers, and background tasks can accidentally capture references longer than intended.
  • Native interop, pinned objects, and unsafe code bypass normal GC safety.

5. Can you explain deferred execution in LINQ and describe a bug or performance problem it can cause?

Deferred execution means a LINQ query is not run when you define it, it runs when you enumerate it, like with foreach, ToList(), Count(), or First(). That is useful because it avoids unnecessary work, supports composition, and can reflect the latest state of the source collection.

A common bug is accidental multiple enumeration. Example: you build var adults = people.Where(p => ExpensiveCheck(p)); and then call adults.Count() and later adults.ToList(). The filter runs twice, which can hurt performance or repeat side effects. With EF, it can mean multiple database queries. Another issue is source mutation: if the collection changes after the query is defined but before it is enumerated, results may differ from what you expected. If you want a snapshot, materialize once with ToList() or ToArray().
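That multiple-enumeration bug can be demonstrated in a few lines, using a counter and a hypothetical ExpensiveCheck predicate:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

int checkCount = 0;
bool ExpensiveCheck(int age) { checkCount++; return age >= 18; }

var people = new List<int> { 15, 22, 30 };
var adults = people.Where(p => ExpensiveCheck(p)); // defines the query, runs nothing

var count = adults.Count();     // first enumeration: 3 checks
var list = adults.ToList();     // second enumeration: 3 more checks
Console.WriteLine(checkCount);  // 6 — the filter ran twice

// Materialize once if you want a snapshot:
var snapshot = people.Where(p => ExpensiveCheck(p)).ToList();
```

With an in-memory list this only wastes CPU; with an EF IQueryable, the same pattern would hit the database twice.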

6. What is the difference between struct, class, and record struct, and how do you decide which one to use?

Here’s the practical way to think about it:

  • class is a reference type, good when you want identity, inheritance, shared mutation, or nullability.
  • struct is a value type, copied by value, best for small, immutable, value-like data such as Point, DateTime, or Guid.
  • record struct is also a value type, but it gives you record-style features like value-based equality, nicer ToString(), and concise syntax.

How I choose:

  • Use class for entities or objects with lifecycle, identity, or larger mutable state.
  • Use struct for tiny data types where copying is cheap and value semantics make sense.
  • Use record struct when it is value-like data and you want built-in equality and concise modeling.
  • Avoid large or mutable structs, because copies can be expensive and bug-prone.
  • If in doubt, default to class, then optimize only when value semantics are clearly the right fit.

7. What are some differences between LINQ to Objects and LINQ providers like Entity Framework?

The big difference is where the query runs and what can be translated.

  • LINQ to Objects works in memory on IEnumerable<T>, using normal .NET methods and delegates.
  • Entity Framework works on IQueryable<T>, building an expression tree that gets translated to SQL.
  • With LINQ to Objects, any C# method can run. With EF, only translatable expressions work, otherwise you get runtime exceptions or client-side evaluation.
  • LINQ to Objects uses CLR semantics. EF uses database semantics, so things like null handling, string comparison, and date functions can behave differently.
  • Performance is different too. Objects means data is already loaded. EF means query shape matters, because Where, Select, joins, and Include affect generated SQL and database round trips.

A practical rule, use LINQ to Objects for in-memory collections, and EF LINQ carefully for database queries.

8. What is the difference between IEnumerable, ICollection, IList, and IQueryable, and when would you choose each?

They build on each other, and the main difference is what they promise: just iteration, collection semantics, list semantics, or query translation.

  • IEnumerable<T>: simplest, forward-only iteration. Use when callers only need to read/loop, and you want the least coupling.
  • ICollection<T>: adds Count, Add, Remove, Clear. Use when you need basic mutable collection behavior but not indexing.
  • IList<T>: adds index access like list[0], Insert, RemoveAt. Use when order and positional access matter.
  • IQueryable<T>: represents a query that can be translated by a provider, like EF to SQL. Use for database queries you want executed remotely.

Rule of thumb: expose the narrowest interface you need. For in-memory data, prefer IEnumerable<T> or IReadOnlyList<T> if indexing matters. For EF, keep IQueryable<T> inside the data layer unless the caller truly needs to compose database queries.


9. What is boxing and unboxing, and why can it matter in performance-sensitive C# code?

Boxing is when a value type like int or struct gets wrapped as an object or interface, which puts it on the heap. Unboxing is the reverse, pulling the value type back out, with an explicit cast.

Why it matters in hot paths:

  • Boxing allocates memory on the heap, so it adds GC pressure.
  • Unboxing needs a cast and type check, so there is extra CPU work.
  • It often happens accidentally, like with non-generic collections such as ArrayList, or APIs typed as object.
  • In tight loops or high-throughput code, those small costs add up fast.
  • Generics help avoid it, for example List<int> stores int directly without boxing.

Example: assigning int x = 5; object o = x; boxes x. Then int y = (int)o; unboxes it.
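The same example, written out with the accidental-boxing case next to the generic alternative:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

int x = 5;
object o = x;      // boxing: the int is wrapped in a heap object
int y = (int)o;    // unboxing: runtime type check, then the value is copied out
Console.WriteLine(y); // 5

// Accidental boxing: the non-generic ArrayList stores object,
// so every Add(int) allocates a box on the heap.
var oldStyle = new ArrayList();
oldStyle.Add(1);

// Generic collections avoid it: List<int> stores ints inline, no boxing.
var modern = new List<int> { 1 };
```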

10. What is the purpose of the using statement and using declarations, and how do they relate to IDisposable and IAsyncDisposable?

They exist to guarantee cleanup of resources, even if an exception happens. In C#, using works with types that implement IDisposable, and await using works with types that implement IAsyncDisposable.

  • using statement creates a scope, disposal happens at the end of that block.
  • using declaration is shorter, disposal happens at the end of the containing scope.
  • Both compile down to a try/finally, calling Dispose() automatically.
  • Use them for unmanaged or scarce resources, files, streams, DB connections, readers.
  • IDisposable is for synchronous cleanup, IAsyncDisposable is for async cleanup like flushing or network I/O.

A practical way to say it in an interview: use using when lifetime should be tightly scoped, use a using declaration when you want cleaner code but still deterministic disposal. Use await using when cleanup itself is asynchronous.
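Both forms side by side, in a small sketch that writes and reads back a temp file (the file name is arbitrary):

```csharp
using System;
using System.IO;

string path = Path.Combine(Path.GetTempPath(), "using-demo.txt");

// using statement: disposal happens at the end of this block,
// even if an exception is thrown (compiles down to try/finally).
using (var writer = new StreamWriter(path))
{
    writer.WriteLine("hello");
} // writer.Dispose() runs here

// using declaration: disposal happens at the end of the containing scope.
string ReadBack()
{
    using var reader = new StreamReader(path);
    return reader.ReadLine()!;
} // reader.Dispose() runs here

Console.WriteLine(ReadBack()); // hello

// Async cleanup pairs with IAsyncDisposable instead:
// await using var stream = File.OpenRead(path);
```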


11. How do nullable reference types change the way you write and review C# code?

Nullable reference types make nullability explicit, so I treat null as part of the type contract instead of a runtime surprise. It changes both how I write APIs and how I review intent.

  • I declare intent clearly, string means never null, string? means maybe null.
  • I initialize required members, use constructors well, and avoid leaving objects half-valid.
  • I add guard clauses at boundaries, but stop doing defensive null checks everywhere internally.
  • I use flow analysis features like ?., ??, ! sparingly, and review ! very critically.
  • In reviews, I check annotations match reality, especially DTOs, EF models, and public APIs.
  • I look for warning fixes that hide bugs versus ones that improve the design.
  • For legacy code, I enable it gradually, project by project, and treat warnings as refactoring guides.

12. What are records in C#, and when would you choose a record over a class or struct?

Records are reference types (unless you use record struct) built for modeling data. Their big value is value-based equality, concise syntax, and nondestructive mutation with with. Two records with the same property values compare equal, which is different from normal classes, where equality is usually by reference.

I’d choose a record when:

  • The type represents data, not identity or behavior-heavy objects.
  • You want immutability or mostly immutable objects.
  • Value equality matters, like DTOs, messages, configs, or API contracts.
  • You want cleaner syntax, especially for constructor-style models.

I’d choose a class when identity, lifecycle, or mutable state matters. I’d choose a struct for tiny, value-like types where copy semantics are intentional and allocations matter. For domain entities like Customer, class is usually better. For Address or OrderCreated, record often fits nicely.

13. Can you walk me through your experience with C# and the kinds of applications you’ve built with it?

I’ve used C# mainly for backend and full-stack work, and I like it because it scales well from simple APIs to larger distributed systems. Most of my experience is with .NET Core and .NET 6+, building services that are clean, testable, and easy to deploy.

  • Built REST APIs with ASP.NET Core, Entity Framework, SQL Server, and Redis.
  • Worked on internal business apps, dashboards, and workflow tools with Blazor and some MVC.
  • Built background processing with hosted services, queues, and scheduled jobs.
  • Integrated third-party systems using HTTP clients, OAuth, webhooks, and message brokers.
  • Focused a lot on architecture, dependency injection, async programming, logging, and unit testing.

One example, I built a multi-tenant API for operational reporting. I handled auth, data access patterns, caching, and performance tuning, then deployed it with CI/CD and containerized it for cloud hosting.

14. What is the difference between Task, ValueTask, and void-returning async methods, and when is each appropriate?

Task, ValueTask, and async void mainly differ in how they represent completion, errors, and allocations.

  • Task is the default choice. It is awaitable, composable, can be stored, passed around, and exceptions flow correctly.
  • ValueTask is an optimization for hot paths where the result is often available synchronously, avoiding some allocations.
  • ValueTask has tradeoffs, it is harder to use correctly, should usually be awaited only once, and adds complexity.
  • async void should almost never be used, because callers cannot await it and exceptions are harder to observe.
  • Use async void mainly for event handlers, like UI button click methods.

Rule of thumb, return Task for almost everything, Task<T> for async results, ValueTask<T> only after profiling shows it helps, and void only for true event handlers.

15. How do delegates, events, and lambda expressions relate to each other in C#?

They’re closely connected. A delegate is the type-safe function signature, an event is a controlled way to expose delegate-based notifications, and a lambda is just a concise way to create the delegate target.

  • Delegate: defines a method shape, like Func<int, int> or a custom delegate void Notify(string msg).
  • Event: usually built on a delegate, lets other code subscribe with += and unsubscribe with -=.
  • Lambda: shorthand for an anonymous method, like x => x * 2, often assigned to a delegate or used as an event handler.
  • Relationship: events use delegates under the hood, and lambdas are a common way to supply the delegate implementation.
  • Practical example: button.Click += (s, e) => Console.WriteLine("Clicked"); where Click is an event, backed by a delegate type, and the lambda is the handler.
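The delegate-plus-lambda relationship in the bullets above can be shown without a UI: a multicast Action is the same mechanism that += on an event uses underneath.

```csharp
using System;
using System.Collections.Generic;

var received = new List<string>();

// A delegate instance whose target is a lambda.
Action<string> notify = msg => received.Add("first:" + msg);

// Multicast: += combines handlers, exactly what event subscription builds on.
notify += msg => received.Add("second:" + msg);

notify("hello"); // invokes both handlers in order
Console.WriteLine(string.Join(", ", received)); // first:hello, second:hello

// A real event wraps a delegate field like this but only exposes += / -=
// to subscribers, e.g. button.Click += (s, e) => ... in UI code.
```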

16. What is the difference between abstract classes and interfaces, especially after default interface methods were introduced?

Both define contracts, but I choose them for different reasons.

  • abstract class is for shared state and behavior, a base type with partial implementation, fields, constructors, protected members, and non-public logic.
  • interface is for capabilities, a public contract that unrelated types can implement, and a class can implement many interfaces.
  • Abstract classes support single inheritance only, interfaces avoid that limitation.
  • Default interface methods let interfaces include reusable method bodies, mainly for versioning and small shared behavior.
  • They still cannot hold instance fields or constructors, so they are not a true replacement for abstract classes.

Rule of thumb, use an interface when you want flexibility and multiple implementations. Use an abstract class when derived types share core data, lifecycle, or protected implementation details.

17. How do extension methods work, and when do they improve code clarity versus hide important behavior?

Extension methods are static methods that the compiler lets you call like instance methods. You define them in a static class, mark the first parameter with this, and then myString.IsValid() gets compiled as MyExtensions.IsValid(myString). They do not actually modify the type, they just add nicer syntax.

  • They improve clarity when they are small, obvious, and domain-friendly, like string.IsNullOrWhiteSpace()-style helpers.
  • They work well for fluent pipelines, especially with LINQ, where chaining reads naturally.
  • They start to hide behavior when they do expensive work, hit a database, mutate state, or throw surprising exceptions.
  • They can also hurt readability if the method name looks like a built-in capability of the type but really comes from some random namespace.
  • My rule is, use them for discoverable convenience, not for side effects or business-critical logic that deserves explicit dependencies.

18. How do generics improve type safety and performance, and what are the limitations of generics in C#?

Generics improve type safety by moving checks to compile time. If you use List<int>, you cannot accidentally add a string, so you avoid invalid casts and a lot of runtime errors. They also make APIs cleaner because the type intent is explicit.

On performance, generics reduce boxing and unboxing for value types. For example, List<int> stores int directly, unlike old non-generic collections like ArrayList, which box values into object. That means less allocation and better speed.

Limitations:

  • C# generics are reified, but runtime type behavior still has limits.
  • You cannot use arithmetic operators on T without extra constraints or helpers.
  • You cannot instantiate T unless you use the new() constraint.
  • Some constraints are limited, you cannot express every possible type rule.
  • Reflection, variance, and nullable interactions can add complexity.

19. Can you explain covariance and contravariance in C#, with a practical example from your work?

Variance is about assignment compatibility for related generic types. Covariance means "more derived is okay" for outputs, contravariance means "less derived is okay" for inputs.

  • Covariance uses out, like IEnumerable<Dog> assigned to IEnumerable<Animal>, because you only read items.
  • Contravariance uses in, like Action<Animal> assigned to Action<Dog>, because a handler that accepts any Animal can also accept a Dog.
  • In practice, I used this in a messaging pipeline. IMessageHandler<in T> let a generic IMessageHandler<BaseMessage> handle OrderCreatedMessage without extra adapters.
  • That made registration simpler in DI and reduced duplicate handlers.
  • Key rule, if a type parameter is produced, use covariance. If it is consumed, use contravariance.
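Both directions compile only because of the out/in annotations on the BCL types involved; here string and object play the Dog and Animal roles:

```csharp
using System;
using System.Collections.Generic;

// Covariance (IEnumerable<out T>): a producer of the derived type
// is a valid producer of the base type.
IEnumerable<string> strings = new List<string> { "a", "b" };
IEnumerable<object> objects = strings; // ok, read-only direction
Console.WriteLine(string.Join(",", objects)); // a,b

// Contravariance (Action<in T>): a consumer of the base type
// can stand in as a consumer of the derived type.
var seen = new List<object>();
Action<object> handleAny = item => seen.Add(item);
Action<string> handleString = handleAny; // ok, write-only direction
handleString("hello");
Console.WriteLine(seen[0]); // hello
```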

20. How do you handle exceptions in C# applications, and what do you consider a good exception-handling strategy?

I treat exceptions as part of the app’s reliability story, not just error cleanup. The goal is, fail fast on programmer mistakes, recover only when the failure is expected, and always leave good diagnostics.

  • Catch exceptions at the right boundary, not everywhere. Let lower layers throw, handle near API, UI, or job boundaries.
  • Use specific exceptions, avoid catching Exception unless you are logging and rethrowing or translating it.
  • Don’t use exceptions for normal control flow. Validate inputs and use result patterns when failure is expected.
  • Preserve stack traces with throw;, not throw ex;.
  • Log enough context, correlation IDs, user or request info, but avoid leaking sensitive data.

A solid strategy is global handling plus local intent: middleware or top-level handlers for unhandled exceptions, custom domain exceptions where useful, retries only for transient failures, and user-friendly messages while logs keep the technical detail.

21. What is the difference between throw and throw ex, and why does it matter?

The key difference is stack trace preservation.

  • throw; rethrows the current exception and keeps the original stack trace.
  • throw ex; throws the same exception object again, but resets the stack trace to the current catch block.
  • That matters because debugging gets much harder if you lose where the exception actually started.
  • Best practice: inside a catch, use throw; unless you are intentionally wrapping with a new exception.
  • If you need more context, do throw new SomeException("extra context", ex);

Example: if a repository method fails and you catch it in a service, throw; lets you see the repository line that broke. throw ex; makes it look like the error started in the service catch block, which hides the real source.
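A sketch of that repository/service scenario, with hypothetical method names; the comments mark where throw; and throw ex; diverge:

```csharp
using System;

// Hypothetical data-access method that fails.
void Repository() => throw new InvalidOperationException("db failure");

string ServiceCall()
{
    try
    {
        Repository();
        return "ok";
    }
    catch (InvalidOperationException)
    {
        // throw; rethrows and keeps the stack trace pointing at Repository().
        // throw ex; would reset the trace to this line, hiding the real origin.
        throw;
    }
}

try
{
    ServiceCall();
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message); // db failure
}
```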

22. How do pattern matching features in modern C# improve readability or maintainability in real codebases?

Pattern matching makes branching logic more declarative, so the code says what shape of data you expect instead of how to inspect it step by step. That usually means fewer casts, fewer null checks, and less nesting.

  • is patterns replace manual as plus null checks, so intent is obvious.
  • switch expressions turn long if/else chains into compact, exhaustive rules.
  • Property and positional patterns let you match nested data without repetitive guard code.
  • Relational and logical patterns, like <, >, and, or, make validation rules read like business rules.
  • Exhaustiveness with enums or record hierarchies helps catch missing cases during refactoring.

In real codebases, this pays off when models evolve. If a new subtype or state gets added, pattern-based code tends to fail in clearer places, making maintenance safer and faster.
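A couple of the patterns above in switch-expression form, with made-up classification rules for illustration:

```csharp
using System;

// Relational and logical patterns: reads like a business rule table.
string Describe(int n) => n switch
{
    < 0 => "negative",
    0 => "zero",
    > 0 and < 10 => "small",
    _ => "large"
};

// Null check plus property patterns, no manual casts or nesting.
string Classify(string? s) => s switch
{
    null => "missing",
    { Length: 0 } => "empty",
    { Length: < 5 } => "short",
    _ => "long"
};

Console.WriteLine(Describe(7));    // small
Console.WriteLine(Classify("hi")); // short
```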

23. What are expression-bodied members, and where do you find them helpful or harmful?

Expression-bodied members are a compact C# syntax for members that can be written as a single expression, using =>. You’ll see them on methods, read-only properties, constructors, finalizers, indexers, and even property accessors in newer C# versions.

  • Helpful when the intent is tiny and obvious, like public int Count => _items.Count;
  • Great for simple wrappers, projections, calculated properties, or one-line methods
  • They reduce boilerplate, which can make DTOs and immutable types cleaner
  • Harmful when logic stops being truly simple, because debugging and readability get worse
  • I avoid them if I need multiple steps, branching, logging, exception handling, or comments

My rule is simple, if it reads naturally in one line, use it. If I have to mentally unpack it, I switch back to a normal block body.

24. How does dependency injection typically work in .NET applications, and how have you used it in C# projects?

In .NET, dependency injection usually means you register services in a container, then let the framework supply them where needed, usually through constructor injection. In ASP.NET Core, this is built in via IServiceCollection, with lifetimes like Transient, Scoped, and Singleton.

  • I usually register interfaces to implementations in Program.cs, like IEmailService to EmailService.
  • Controllers, services, and middleware receive dependencies through constructors, which keeps classes loosely coupled.
  • I use Scoped for request-based services, Singleton for stateless shared services, and Transient for lightweight, short-lived objects.
  • In C# projects, DI made unit testing much easier because I could swap real dependencies with mocks.
  • One example, I built an API where repositories and domain services were injected, which kept business logic clean and data access replaceable.

25. What are the differences between transient, scoped, and singleton lifetimes, and what bugs can come from choosing the wrong one?

In .NET dependency injection, the difference is all about how long an instance lives.

  • Transient: new instance every time it’s requested. Good for lightweight, stateless services.
  • Scoped: one instance per scope, usually per web request. Good for request-specific state, EF Core DbContext, unit-of-work patterns.
  • Singleton: one instance for the whole app lifetime. Good for shared, stateless, thread-safe services, config readers, caches.

Wrong lifetime choices cause subtle bugs:

  • Putting mutable state in a singleton can create race conditions and cross-request data leaks.
  • Injecting a scoped service into a singleton usually causes lifetime errors, or forces bad workarounds.
  • Making DbContext singleton is a classic bug, thread-safety issues, stale tracking, corrupted behavior.
  • Overusing transient for expensive objects can hurt performance and increase allocations.
  • Using scoped when you needed singleton can break caching or create inconsistent shared state.

26. What is your approach to debugging a C# production issue that you cannot reproduce locally?

I keep it methodical: stabilize first, increase observability, narrow the gap between prod and local, then test hypotheses safely.

  • Start with impact and scope, what broke, who is affected, and whether I need a rollback, feature flag, or mitigation first.
  • Pull evidence from logs, traces, metrics, correlation IDs, request payloads, and recent deploys or config changes.
  • Compare production-only differences, environment variables, secrets, data shape, traffic patterns, time zones, culture, OS, load, and dependency versions.
  • Add targeted telemetry, not noisy logging, around the suspected path and ship it behind a flag if needed.
  • Reproduce in a staging environment using production-like data and settings, or replay captured requests safely.
  • If still unclear, use live debugging tools carefully, dump analysis, dotnet-trace, dotnet-dump, or SQL profiling, then fix, verify, and add a regression test plus better alerts.

27. How have you used LINQ in performance-critical paths, and when have you decided not to use it?

I use LINQ a lot for readability, but in hot paths I treat it as a tradeoff, not a default. I’m fine with LINQ when the data size is modest, the query is clear, and profiling shows it is not a bottleneck. I’ve used it effectively for projections, filtering, grouping, and shaping data at boundaries, especially in service layers and reporting code.

  • In performance-critical loops, I avoid chained LINQ if it creates extra allocations, multiple enumeration, or delegate overhead.
  • I especially watch Where().Select().ToList() patterns inside per-request or per-item processing.
  • If profiling shows LINQ on the hot path, I switch to a plain for loop, pre-size collections, and reduce temporary objects.
  • I also avoid LINQ when I need tight control over short-circuiting, indexing, or mutation.
  • In one API, replacing nested LINQ with imperative loops cut CPU and allocations enough to noticeably improve p95 latency.

28. How do you make a class immutable in C#, and what tradeoffs come with immutability?

Make the object’s state set once, then never allow it to change.

  • Use readonly fields or get-only properties, and set them in the constructor.
  • Don’t expose setters, mutable fields, or methods that modify internal state.
  • If a property is a reference type, prefer immutable types like string, record, or immutable collections.
  • For mutable inputs like List<T>, make defensive copies on the way in, and often on the way out.
  • In modern C#, record and init properties are common tools, though init setters can still run during object initialization, so the object is only guaranteed immutable once construction finishes.

Tradeoffs are mostly about performance and ergonomics. Immutability makes code easier to reason about, thread-safe by default, and safer for caching or sharing. The downside is extra allocations, copying costs, and sometimes more verbose update patterns, since changes usually mean creating a new object instead of modifying the existing one.

29. What mocking strategies have you used in unit tests for C# code, and when do mocks become a problem?

I usually use mocks at boundaries, not everywhere. In C#, that means repositories, HTTP clients, message buses, clocks, file systems, and anything slow or nondeterministic. For core business logic, I prefer plain unit tests with real objects or simple fakes.

  • I’ve used Moq and NSubstitute for interface-based dependencies and behavior verification.
  • I prefer stubs or fakes when I only need canned data, they’re less brittle than interaction-heavy mocks.
  • For HttpClient, I mock HttpMessageHandler, or use a lightweight test server for higher confidence.
  • Mocks become a problem when tests verify implementation details, like exact call counts or call order.
  • They’re also a smell if you need many mocks to test one class, that usually means the design has too many responsibilities.
  • If tests break after harmless refactors, I replace mocks with fakes, real collaborators, or broader integration tests.

30. How do you unit test async methods, and what pitfalls should be avoided?

The main rule is, always test the Task, not the side effects around it. In C#, your test method should usually be async Task, then await the method under test so exceptions and timing behave correctly.

  • Use async test methods, like public async Task TestName(), not async void
  • Assert async exceptions with helpers like Assert.ThrowsAsync<Exception>(() => service.DoWorkAsync())
  • Mock async dependencies with ReturnsAsync(...) or Task.FromResult(...)
  • Avoid .Result, .Wait(), or .GetAwaiter().GetResult(), they can deadlock and hide real async behavior
  • Don’t add fake delays like Thread.Sleep, instead await real signals or use test doubles
  • Be careful with fire-and-forget code, it’s hard to observe and often needs refactoring behind an interface or scheduler

A common pitfall is writing a test that calls an async method without awaiting it. The test may pass even though the method later throws.
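A sketch of that pitfall, with a hypothetical DoWorkAsync that only fails after its first await; the try/await shape is essentially what helpers like xUnit's Assert.ThrowsAsync do internally:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical method under test: fails only after an await.
async Task DoWorkAsync()
{
    await Task.Delay(10);
    throw new InvalidOperationException("late failure");
}

// Awaiting surfaces the exception where the test can observe it.
bool observed = false;
try
{
    await DoWorkAsync();
}
catch (InvalidOperationException)
{
    observed = true;
}
Console.WriteLine(observed); // True

// Pitfall: calling DoWorkAsync() without await would let the test
// finish "green" before the exception is ever thrown.
```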

31. How do you avoid common Entity Framework issues such as N+1 queries, tracking overhead, or unintended client-side evaluation?

I avoid EF issues by being explicit about query shape, tracking, and what runs in SQL.

  • Prevent N+1 by eager loading with Include, or better, project with Select so EF fetches exactly what the API needs.
  • Turn off tracking for read-only work with AsNoTracking(), and use AsNoTrackingWithIdentityResolution() if related entities repeat.
  • Watch for client-side evaluation by keeping filters, joins, and aggregates in LINQ expressions EF can translate, then check generated SQL with ToQueryString().
  • Avoid loading huge graphs, paginate with Skip and Take, and prefer split queries when one big join causes Cartesian explosion.
  • Precompile hot queries with EF.CompileQuery if profiling shows repeated overhead.

In practice, I also enable SQL logging and review query plans early, because EF problems usually show up as bad SQL, not bad C#.

32. How do CancellationToken and cooperative cancellation work, and how do you propagate cancellation properly through a C# application?

CancellationToken is .NET’s way to stop async or long-running work cooperatively. It does not kill a thread. A caller creates a CancellationTokenSource and passes the token down, and callee code checks token.IsCancellationRequested, calls token.ThrowIfCancellationRequested(), or passes the token into cancellable APIs like HttpClient, Task.Delay, EF Core, or streams.

To propagate it properly:

  • Accept a CancellationToken in every async method that does I/O or long work.
  • Pass it all the way down to dependencies, do not create new sources unless you need timeouts or linked tokens.
  • Use CancellationToken.None only when work truly must not be canceled.
  • If canceled, let OperationCanceledException bubble, do not wrap it as a generic failure.
  • In loops, check the token regularly and stop cleanly, releasing resources in finally.

In ASP.NET Core, I usually start with HttpContext.RequestAborted and flow that through my services and repositories.
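A minimal sketch of that flow, with a hypothetical ReportService and a timeout-based source on the caller side:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical service: the token is accepted and passed into every cancellable API.
public class ReportService
{
    private readonly HttpClient _http = new();

    public async Task<string> BuildReportAsync(CancellationToken ct)
    {
        // I/O: pass the token through rather than ignoring it.
        var data = await _http.GetStringAsync("https://example.com/data", ct);

        for (int i = 0; i < 1000; i++)
        {
            ct.ThrowIfCancellationRequested(); // check regularly in CPU-bound loops
            // ... process chunk i ...
        }
        return data;
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The caller owns the source; here it doubles as a 5-second timeout.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
        try
        {
            await new ReportService().BuildReportAsync(cts.Token);
        }
        catch (OperationCanceledException)
        {
            // Cancellation is a normal outcome, not a generic failure.
        }
    }
}
```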

33. What is a deadlock in the context of C#, and how have you diagnosed or prevented one?

A deadlock is when two or more threads block each other forever, usually because each holds a lock the other needs. In C#, it also shows up with async code, like calling .Result or .Wait() on a task that needs the current synchronization context to continue.

  • I diagnose it by checking thread dumps in Visual Studio, looking at the Parallel Stacks and Threads windows, and identifying where threads are stuck on lock, Monitor, or task waits.
  • In production, I look for hung requests, no CPU activity, and logs that stop at lock acquisition points.
  • I prevent it by enforcing consistent lock ordering and keeping lock scope small.
  • I avoid blocking on async, using await instead of .Result or .Wait().
  • If shared state is complex, I prefer higher-level tools like ConcurrentDictionary, channels, or immutable objects.
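The sync-over-async deadlock mentioned above reduces to a few lines. In a UI app, or classic ASP.NET with a synchronization context, the blocked thread and the queued continuation need each other:

```csharp
using System.Threading.Tasks;

public class DataLoader
{
    public async Task<string> GetDataAsync()
    {
        await Task.Delay(100);   // continuation wants to resume on the captured context
        return "data";
    }

    // DON'T: under a synchronization context this deadlocks, because the context's
    // thread is parked in .Result while the continuation is queued to that same context.
    public string LoadBlocking() => GetDataAsync().Result;

    // DO: stay async end-to-end, so no thread is blocked waiting on the task.
    public Task<string> LoadAsync() => GetDataAsync();
}
```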

34. Describe a time when you improved the performance of a C# service or API. What metrics did you use?

I’d answer this with a quick STAR structure: situation, what was slow, what I changed, and the measurable result.

At one company, a .NET API endpoint was timing out during peak traffic because it made several sequential database calls and did too much mapping in memory.

  • I profiled it with Application Insights and SQL query metrics, focusing on p95 latency, timeout rate, DB duration, and CPU.
  • I found N+1 queries and unnecessary ToList() materialization.
  • I consolidated queries, added the right indexes, switched some paths to async, and introduced response caching for rarely changing data.
  • I also reduced payload size and used projections instead of loading full entities.
  • Result, p95 dropped from about 2.8 seconds to 700 ms, the timeout rate fell by over 80%, and CPU usage on the app nodes decreased by around 25%.

35. What is the difference between First, FirstOrDefault, Single, and SingleOrDefault, and how do you choose the right one?

These LINQ methods differ in what they expect and how they fail:

  • First(), returns the first match, throws if there is no match.
  • FirstOrDefault(), returns the first match, or default like null if none.
  • Single(), expects exactly one match, throws if there are zero or more than one.
  • SingleOrDefault(), expects zero or one match, returns default if none, throws if more than one.

How I choose:

  • Use First when any one match is fine and at least one should exist.
  • Use FirstOrDefault when no match is acceptable.
  • Use Single when the data must be unique, like a lookup by a unique key.
  • Use SingleOrDefault when uniqueness is expected, but absence is allowed.

Key idea, Single* enforces uniqueness, First* does not.
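A quick illustration of the success and failure modes:

```csharp
using System;
using System.Linq;

var numbers = new[] { 10, 20, 20, 30 };

Console.WriteLine(numbers.First(n => n > 15));           // 20 (first match wins)
Console.WriteLine(numbers.FirstOrDefault(n => n > 99));  // 0  (default(int), no match)
Console.WriteLine(numbers.Single(n => n == 30));         // 30 (exactly one match)
Console.WriteLine(numbers.SingleOrDefault(n => n > 99)); // 0  (zero matches is allowed)

// numbers.Single(n => n == 20)  -> throws InvalidOperationException (two matches)
// numbers.First(n => n > 99)    -> throws InvalidOperationException (no match)
```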

36. How do equality and hashing work in C#, and what should be considered when overriding Equals and GetHashCode?

In C#, equality has two common forms: reference equality and value equality. For reference types, == usually checks whether two variables point to the same object, unless the operator is overloaded. Equals is for logical equality, and value types often inherit field-by-field equality unless overridden.

  • If you override Equals, you should also override GetHashCode, because hash-based collections like Dictionary and HashSet rely on both.
  • Equal objects must return the same hash code. Unequal objects can still share a hash code.
  • Use immutable fields for equality when possible, because changing a hashed value after insertion breaks lookups.
  • Implement IEquatable<T> for strongly typed, faster equality checks.
  • Keep Equals reflexive, symmetric, transitive, and consistent, and handle null safely.

In modern C#, HashCode.Combine(...) is the usual way to build a good hash code.
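A minimal hand-written example, assuming a simple Point type (a record would generate most of this automatically):

```csharp
using System;
using System.Collections.Generic;

public sealed class Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);

    // IEquatable<T> gives a strongly typed path without boxing.
    public bool Equals(Point? other) =>
        other is not null && X == other.X && Y == other.Y;

    public override bool Equals(object? obj) => Equals(obj as Point);

    // Equal objects must return equal hash codes, or HashSet/Dictionary lookups break.
    public override int GetHashCode() => HashCode.Combine(X, Y);
}

public static class Demo
{
    public static void Main()
    {
        var set = new HashSet<Point> { new Point(1, 2) };
        Console.WriteLine(set.Contains(new Point(1, 2))); // True: value equality + consistent hash
    }
}
```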

37. What is the difference between == and Equals in C#, and how can operator overloading change expectations?

== and Equals can mean different things in C#.

  • == is an operator, its behavior depends on the type and whether the operator is overloaded.
  • Equals is a method, by default on object it checks reference equality, but many types override it for value equality.
  • For reference types, == usually means same object reference, unless the type overloads it, like string.
  • For value types, Equals usually compares values, and == may or may not be available unless defined.

Operator overloading changes expectations because a type can make == do value comparison instead of reference comparison. Example, two different instances of a Money class could return true for a == b if amount and currency match. Best practice is to keep ==, Equals, and GetHashCode consistent so collections and comparisons behave predictably.
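A sketch of that Money example, with ==, Equals, and GetHashCode kept consistent (the type and fields are invented for illustration; a record struct would give the same semantics with less code):

```csharp
using System;

public readonly struct Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }
    public Money(decimal amount, string currency) => (Amount, Currency) = (amount, currency);

    public bool Equals(Money other) => Amount == other.Amount && Currency == other.Currency;
    public override bool Equals(object? obj) => obj is Money m && Equals(m);
    public override int GetHashCode() => HashCode.Combine(Amount, Currency);

    // Overloading == changes expectations: it now compares values, not references.
    public static bool operator ==(Money a, Money b) => a.Equals(b);
    public static bool operator !=(Money a, Money b) => !a.Equals(b);
}

public static class Demo
{
    public static void Main()
    {
        var a = new Money(10m, "USD");
        var b = new Money(10m, "USD");
        Console.WriteLine(a == b); // True: two distinct instances, equal by value
    }
}
```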

38. What thread-safety concerns have you faced in C# applications, and how did you address them?

A few big ones come up a lot in C#:

  • Shared mutable state, multiple threads updating the same object, fixed with lock, ConcurrentDictionary, or making data immutable.
  • Race conditions, especially check-then-act logic, solved with atomic ops like Interlocked, or by moving the whole operation inside one lock.
  • UI thread issues in WPF/WinForms, background threads touching controls, handled by marshaling back with Dispatcher or SynchronizationContext.
  • Async deadlocks, usually from .Result or .Wait(), avoided by going async end-to-end and using ConfigureAwait(false) in library code.
  • Collection modification during enumeration, fixed by snapshotting with .ToList() or using concurrent collections.

One concrete case, we had duplicate order processing because two workers read the same status before either updated it. I fixed it by making the transition atomic in the database and adding an app-level lock keyed by order ID.
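The read-modify-write race in the first two bullets can be shown in a few lines. This sketch races eight tasks over a plain increment versus Interlocked.Increment:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Counter
{
    private static int _unsafeCount;
    private static int _safeCount;

    public static async Task Main()
    {
        var tasks = new Task[8];
        for (int t = 0; t < tasks.Length; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                for (int i = 0; i < 100_000; i++)
                {
                    _unsafeCount++;                        // non-atomic read-modify-write
                    Interlocked.Increment(ref _safeCount); // atomic
                }
            });
        }
        await Task.WhenAll(tasks);

        Console.WriteLine($"unsafe: {_unsafeCount}");  // typically less than 800000
        Console.WriteLine($"safe:   {_safeCount}");    // always 800000
    }
}
```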

39. How do spans, memory types, or pooled objects help performance in modern C#?

They help by cutting allocations, reducing GC pressure, and avoiding unnecessary copies, which matters a lot in hot paths like parsing, networking, and serialization.

  • Span<T> and ReadOnlySpan<T> let you work with slices of arrays, strings, or stack memory without allocating new objects.
  • Memory<T> is the heap-friendly version of Span<T>, useful when data needs to live across async calls or be stored in fields.
  • stackalloc with spans can put small temporary buffers on the stack, which is very fast and avoids GC entirely.
  • ArrayPool<T> reuses large arrays instead of constantly allocating and collecting them, which helps throughput and latency.
  • Object pooling is similar for reusable objects like buffers, builders, or parsers, but you need to reset state carefully.

The tradeoff is complexity. These tools are best when profiling shows allocation or copying is a bottleneck.
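A compact sketch of the three techniques together, slicing a string without allocating, using a stackalloc buffer, and renting from ArrayPool:

```csharp
using System;
using System.Buffers;

public static class SpanDemo
{
    public static void Main()
    {
        // Slice a string without allocating a substring.
        ReadOnlySpan<char> line = "id=12345;name=test";
        ReadOnlySpan<char> id = line.Slice(3, 5);
        Console.WriteLine(int.Parse(id));        // 12345, no intermediate string

        // Small temporary buffer on the stack: no GC involvement at all.
        Span<byte> buffer = stackalloc byte[64];
        buffer.Fill(0xFF);

        // Rent and return a large array instead of allocating one per call.
        byte[] rented = ArrayPool<byte>.Shared.Rent(16 * 1024);
        try
        {
            // ... use rented; note it may be larger than requested and holds stale data ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(rented);
        }
    }
}
```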

40. What is the difference between lock, Monitor, Mutex, SemaphoreSlim, and ReaderWriterLockSlim, and when would you use each?

They all coordinate access, but at different levels and costs.

  • lock is syntactic sugar over Monitor.Enter and Monitor.Exit in a try/finally. Use it for simple in-process mutual exclusion around a short critical section. Best default for most cases.
  • Monitor gives more control than lock, like TryEnter, timeouts, and Wait/Pulse for thread coordination. Use it when you need that extra control.
  • Mutex is heavier and can work across processes via a named mutex. Use it when different processes must share exclusive access.
  • SemaphoreSlim limits concurrency instead of allowing only one thread. Great for throttling, like allowing 10 requests at a time. It also has good async support with WaitAsync.
  • ReaderWriterLockSlim allows many readers or one writer. Use it when reads are frequent, writes are rare, and contention is real.

Rule of thumb: start with lock, use SemaphoreSlim for async throttling, ReaderWriterLockSlim for read-heavy workloads, and Mutex only for cross-process scenarios.
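The SemaphoreSlim throttling case is the one that comes up most in async code, since lock cannot span an await. A sketch limiting 100 pieces of work to 10 in flight:

```csharp
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class Throttle
{
    // At most 10 concurrent operations pass the gate.
    private static readonly SemaphoreSlim Gate = new(10, 10);

    public static async Task ProcessAsync(int id)
    {
        await Gate.WaitAsync();        // async-friendly wait, unlike lock
        try
        {
            await Task.Delay(100);     // stand-in for real I/O work
        }
        finally
        {
            Gate.Release();            // always release, even on failure
        }
    }

    public static Task Main() =>
        Task.WhenAll(Enumerable.Range(0, 100).Select(ProcessAsync));
}
```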

41. What are reflection and attributes used for in C#, and what are the tradeoffs of relying on reflection-heavy designs?

Reflection is how C# code can inspect metadata and interact with types at runtime, like discovering properties, methods, generic arguments, or creating objects dynamically. Attributes are declarative metadata you attach to code, then read through reflection to change behavior. Common uses are serializers, ORMs, dependency injection, model validation, plugin loading, and test frameworks.

Tradeoffs:

  • Flexible and extensible, especially when behavior is driven by metadata.
  • Slower than direct code, because runtime inspection and invocation cost more.
  • Less compile-time safety, errors shift to runtime, like misspelled member names.
  • Harder to debug and maintain, because control flow becomes less obvious.
  • Can hurt trimming, AOT, and obfuscation scenarios unless carefully handled.

In interviews, I’d say use reflection at boundaries and framework code, not in hot paths or core domain logic.
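A miniature of the validator pattern, with a custom attribute read back through reflection (the attribute and form type are invented for illustration; real frameworks like DataAnnotations cache this metadata):

```csharp
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public sealed class RequiredAttribute : Attribute { }

public class SignupForm
{
    [Required] public string? Email { get; set; }
    public string? Nickname { get; set; }
}

public static class MiniValidator
{
    public static bool IsValid(object model)
    {
        // Discover properties at runtime and honor the declarative metadata.
        foreach (PropertyInfo prop in model.GetType().GetProperties())
        {
            if (prop.GetCustomAttribute<RequiredAttribute>() is not null &&
                prop.GetValue(model) is null)
                return false;
        }
        return true;
    }

    public static void Main()
    {
        Console.WriteLine(IsValid(new SignupForm()));                    // False: Email missing
        Console.WriteLine(IsValid(new SignupForm { Email = "a@b.c" })); // True
    }
}
```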

42. How do you balance writing idiomatic modern C# with keeping code understandable for a mixed-experience team?

I treat readability as the first constraint, then use modern C# where it clearly improves signal, safety, or boilerplate. Idiomatic code is great, but if half the team has to stop and decode it, the value drops.

  • I favor features with obvious payoff, like nullable reference types, pattern matching, var when the type is clear, and records for simple data models.
  • I avoid stacking too many clever features together, like dense LINQ, nested patterns, or heavy expression syntax in core business logic.
  • I lean on team conventions, code reviews, and analyzers so style feels consistent, not personal.
  • I introduce newer language features gradually, usually with small examples in PRs or short internal docs.
  • My rule is simple, if a junior dev can debug it confidently in six months, it is probably the right level of modern.

43. How do source generators compare to reflection-based approaches in .NET, and where do you see them fitting in?

Source generators move work from runtime to compile time. Reflection is flexible and easy for discovery, but it costs startup time, allocations, trimming issues, and can be harder for AOT scenarios. Generators produce normal C# ahead of time, so you get better performance, stronger typing, and earlier errors in the build.

Where they fit:

  • High-volume serialization, DI wiring, routing, logging, mappers, and validation.
  • Native AOT, Blazor WebAssembly, mobile, and microservices where startup matters.
  • Framework or platform code, where conventions are known at compile time.
  • Cases where you want analyzers plus generated code for a better developer experience.

I would not replace reflection everywhere. If the shape is truly dynamic, plugin loading, unknown assemblies, user-defined types at runtime, reflection still wins. My rule is, use generators when metadata is knowable at build time and performance or AOT compatibility matters.

44. What is the purpose of assemblies, namespaces, and access modifiers in organizing C# code?

They each organize code at a different level, and together they keep a C# app maintainable and safe.

  • Assemblies are the physical deployment units, usually .dll or .exe, they package compiled code, metadata, and resources.
  • Namespaces are logical groupings inside the codebase, they prevent naming collisions and make types easier to find, like MyApp.Services.
  • Access modifiers control visibility, for example public, private, internal, and protected, so you expose only what other code should use.
  • A common way to explain it is, assembly = packaging boundary, namespace = naming boundary, access modifier = visibility boundary.
  • In practice, I use assemblies to separate projects, namespaces to reflect features or layers, and access modifiers to enforce encapsulation.

45. How do internals and the InternalsVisibleTo attribute affect testing and architecture decisions?

internal is a great tool for keeping your public API small while still organizing code into useful internal types. In testing, InternalsVisibleTo lets a test assembly access those internals, so you can validate important logic without making everything public.

  • Use internal to hide implementation details and reduce coupling.
  • Use InternalsVisibleTo("MyProject.Tests") when internals contain meaningful domain logic worth testing directly.
  • Avoid overusing it, if tests need lots of internals, the design may be too coupled or responsibilities are blurred.
  • Prefer testing through public behavior first, then expose internals to tests only when it improves clarity and speed.
  • Architecturally, it supports clean boundaries, your app exposes contracts publicly, and keeps supporting machinery private.

I usually treat InternalsVisibleTo as a pragmatic compromise, helpful, but also a signal to review whether the code should be refactored into a separate component with a clearer public surface.
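A sketch of how the pieces fit across the two assemblies (project and type names are placeholders; this spans two projects, so it is not a single compilable file):

```csharp
// In the production assembly, e.g. AssemblyInfo.cs or any source file:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Tests")]

// An internal type the test assembly can now see.
internal static class PriceCalculator
{
    internal static decimal ApplyDiscount(decimal price, decimal percent) =>
        price - price * percent / 100m;
}

// In the MyProject.Tests project, the internal member is directly callable:
// Assert.Equal(90m, PriceCalculator.ApplyDiscount(100m, 10m));
```

In SDK-style projects the same thing can be declared in the .csproj instead, which keeps the attribute out of source files.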

46. Can you describe a time when you had to refactor a legacy C# codebase? What did you change first and why?

I’d answer this with a quick STAR structure: situation, task, actions, result, then focus on how I reduced risk before making bigger design changes.

At one job, I inherited a legacy .NET app with huge service classes, static helpers, and almost no tests. The first thing I changed was not business logic, it was safety and visibility.

  • I added characterization tests around the most critical workflows, so I could refactor without breaking hidden behavior.
  • Next, I isolated external dependencies, like file I/O and database calls, behind interfaces.
  • Then I broke apart one high-pain class at a time, usually by extracting smaller services with single responsibilities.
  • I also removed duplicated logic and replaced magic strings and flags with enums and clear models.
  • Result, we cut production bugs and made new feature work much faster.

47. Tell me about a difficult bug you solved in a C# application and how you approached the investigation.

I’d answer this with a quick STAR structure, Situation, Task, Action, Result, and keep the focus on how I investigated, not just the fix.

At one job, we had a C# API that would randomly create duplicate orders under load, but we couldn’t reproduce it locally. I started by tightening logging around the request flow, correlation IDs, and database writes, then compared successful vs duplicate cases. That showed two concurrent requests were passing the same validation before either transaction committed. I reproduced it with a load test, confirmed a race condition, and fixed it with an idempotency key plus a database unique constraint. After deployment, duplicates dropped to zero, and we kept the extra telemetry so similar concurrency issues would be easier to trace.

48. Have you ever disagreed with a team’s approach to architecture or coding standards in a C# project? How did you handle it?

I handle that by separating preference from risk: clarify the goal, bring evidence, propose a small experiment, then commit once the team decides.

On one C# API project, the team wanted to put most business logic directly in controllers to move faster. I disagreed because it would make testing and reuse harder. I did not make it personal, I wrote up two options, a controller-heavy approach and a service-layer approach, and compared them on testability, change impact, and delivery time. Then I built a thin vertical slice both ways and showed that the service approach added very little overhead but gave us cleaner unit tests and easier maintenance. The team adopted the service layer. In cases where my view is not chosen, I still align fully and help make the agreed approach successful.

49. How do you review pull requests for C# code, and what issues do you pay special attention to?

I review PRs in layers: first correctness, then readability, then maintainability, then performance and safety. I also try to understand the intent from the ticket or description before I comment, so I do not nitpick something that was a deliberate tradeoff.

  • Correctness first, edge cases, null handling, async flow, cancellation tokens, exception paths, and whether tests actually prove behavior.
  • C# specifics, IDisposable usage, await instead of blocking calls, LINQ that is clear and not accidentally expensive, and nullable reference warnings.
  • Design, naming, class size, method cohesion, duplicated logic, leaking abstractions, and whether dependencies are easy to mock or replace.
  • Data and security, SQL injection risks, validation, authorization checks, secrets in config, logging sensitive data, and serialization issues.
  • Operational concerns, performance hotspots, allocations, N+1 queries, thread safety, and enough logging/metrics to support production.

50. Suppose a teammate introduces async code that occasionally hangs under load. How would you investigate it?

I’d treat it like a concurrency bug first, then narrow it with evidence instead of guessing.

  • Reproduce it under load in a lower environment, with timestamps, correlation IDs, and thread pool metrics enabled.
  • Check for classic async issues, .Result, .Wait(), blocking locks around async calls, sync-over-async, and missing ConfigureAwait in library code if context matters.
  • Inspect resource starvation, thread pool exhaustion, connection pool limits, database waits, HTTP client socket exhaustion, and queued work that never completes.
  • Capture dumps or traces during the hang, then inspect task states, blocked threads, wait chains, and hot paths with tools like dotnet-trace, dotnet-dump, or PerfView.
  • Add targeted logging around awaits, retries, cancellations, and timeouts, then verify every async path is awaited and cancellable.

Once I find the pattern, I’d fix it and add a stress test so it cannot regress.

51. What C# language features introduced in recent versions have changed the way you write code, and which ones do you avoid?

A few newer C# features genuinely changed how I code because they reduce noise without hiding intent.

  • record and record struct for immutable models, value semantics are built in.
  • Pattern matching, especially switch expressions, property patterns, and relational patterns, makes branching cleaner and safer.
  • Nullable reference types changed my habits a lot, I think about contracts and null flow upfront now.
  • init setters and required members make object construction much clearer.
  • Global using directives and file-scoped namespaces cut boilerplate in every file.

A few I use carefully or avoid:

  • Overusing primary constructors or very dense pattern matching, readability can drop fast.
  • dynamic, unless interop forces it, because it gives up compile-time safety.
  • Reflection-heavy tricks when a simple generic or interface solves it.
  • async void, except true event handlers, because error handling gets messy.
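Several of the features I lean on can be shown in one small sketch, using a hypothetical Order model:

```csharp
using System;

// record: concise immutable model with value equality built in.
public record Order(string Id, decimal Total, string? Coupon)
{
    public required string Customer { get; init; }  // required + init members
}

public static class Pricing
{
    // switch expression with property and relational patterns.
    public static decimal Discount(Order o) => o switch
    {
        { Coupon: "VIP" }   => 0.20m,
        { Total: >= 1000m } => 0.10m,
        { Total: >= 100m }  => 0.05m,
        _                   => 0m,
    };

    public static void Main()
    {
        var order = new Order("A-1", 250m, null) { Customer = "Dana" };
        Console.WriteLine(Discount(order)); // 0.05
    }
}
```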

52. Suppose an API written in C# starts returning inconsistent results after a caching layer was added. How would you approach troubleshooting?

I’d troubleshoot it from the outside in, first proving whether the cache is the source, then narrowing down why data differs.

  • Reproduce the issue with a known input, compare cached vs uncached responses.
  • Check cache keys, bad key design often mixes data between users, tenants, or query variants.
  • Verify TTL, eviction, and invalidation logic, stale data is usually expiration or missed busting.
  • Look for race conditions, especially around concurrent writes, async code, or cache-aside patterns.
  • Inspect serialization, deserialization, and object mutation, shared references can cause weird drift.
  • Add logging and metrics for cache hits, misses, set/remove events, payload version, and timestamps.
  • Bypass the cache in one environment or behind a feature flag to confirm behavior changes.
  • Review consistency expectations, distributed caches can briefly serve old data after updates.

If asked for an example, I’d mention a cache key missing userId, which caused cross-user data leakage and inconsistent responses.
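A sketch of that key bug, with hypothetical tenantId, userId, and paging inputs:

```csharp
// Hypothetical inputs for an orders endpoint.
int tenantId = 7, userId = 42, page = 1, pageSize = 50;

// Bad: the key ignores who is asking, so all users share the same cached page.
string badKey = $"orders:{page}";

// Good: every input that changes the result is part of the key,
// plus a version segment so a format change can invalidate old entries.
string goodKey = $"orders:v2:{tenantId}:{userId}:{page}:{pageSize}";
```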

53. If you joined a team maintaining an older .NET Framework application, how would you evaluate modernization opportunities without causing unnecessary risk?

I’d start with a risk-first assessment, not a rewrite mindset. The goal is to find high-value, low-risk improvements, prove them in small steps, and keep the app stable.

  • Map the current state: app type, .NET Framework version, dependencies, deployment model, pain points, and business critical workflows.
  • Measure before changing: error rates, performance, build time, test coverage, security findings, and support burden.
  • Identify blockers: unsupported packages, old third-party libraries, Web Forms/WCF usage, config complexity, tight coupling, missing automation.
  • Look for safe wins: SDK-style projects, package cleanup, CI/CD, logging, monitoring, test harnesses, and security patching.
  • Run a spike: try upgrading one non-critical component to newer .NET or isolating it behind an API.
  • Modernize incrementally: use the strangler pattern, target bounded areas, and ship behind feature flags with rollback plans.

That shows pragmatism, technical judgment, and respect for business risk.

Get Interview Coaching from C# Experts

Knowing the questions is just the start. Work with experienced professionals who can help you perfect your answers, improve your presentation, and boost your confidence.

Complete your C# interview preparation

Comprehensive support to help you succeed at every stage of your interview journey

Still not convinced? Don't just take our word for it

We've already delivered 1-on-1 mentorship to thousands of students, professionals, managers and executives. Even better, they've left an average rating of 4.9 out of 5 for our mentors.

Find C# Interview Coaches