Master your next C# interview with our comprehensive collection of questions and expert-crafted answers. Get prepared with real scenarios that top companies ask.
async and await are compiler features, not magic. The compiler rewrites an async method into a state machine. When it hits await, it checks whether the awaited Task is done. If not, it returns control to the caller, stores the method state, and schedules the continuation to resume later. By default, it also captures the current SynchronizationContext or TaskScheduler, which is why UI apps resume on the UI thread.
Common mistakes I see:
- Blocking on async with .Result or .Wait(), which can deadlock and kills scalability.
- Using async void outside event handlers, because exceptions are hard to observe.
- Forgetting to await a task, creating fire-and-forget bugs.
- Assuming async makes code faster; it mostly improves responsiveness and throughput.
- Not using ConfigureAwait(false) in library code when context capture is unnecessary.
- Starting too many tasks at once and overwhelming I/O, DB, or thread pool resources.
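The first three mistakes above can be sketched in a few lines. This is a minimal illustration with a hypothetical ReportService; the delay stands in for real I/O.

```csharp
using System;
using System.Threading.Tasks;

public static class ReportService
{
    public static async Task<string> LoadReportAsync()
    {
        await Task.Delay(100);          // simulate I/O
        return "report";
    }

    public static void Risky()
    {
        // Blocking on async: can deadlock in UI / classic ASP.NET contexts,
        // and always ties up a thread while waiting.
        var blocked = LoadReportAsync().Result;

        // Fire-and-forget: the returned Task is discarded, so any exception
        // thrown inside LoadReportAsync is never observed.
        _ = LoadReportAsync();
    }

    public static async Task SafeAsync()
    {
        // Awaiting keeps the thread free and lets exceptions flow to the caller.
        var report = await LoadReportAsync();
        Console.WriteLine(report);
    }
}
```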
I design for testability by reducing hidden dependencies and making behavior easy to isolate.
I depend on abstractions like IClock, IRepository, and ILogger instead of calling new inside core logic, because hidden construction makes mocking and control harder. Async code returns Task and accepts cancellation tokens, so tests can verify behavior cleanly. In practice, if a pricing service needs current time and customer data, I inject both instead of calling DateTime.Now or hitting EF directly. That lets me test pricing rules with simple fakes.
In C#, value types hold the actual data, usually live on the stack or inline, and assignment copies the value. Reference types hold a reference to an object on the heap, and assignment copies the reference, not the object. Value types are great for small, immutable data like DateTime, decimal, or custom structs. Reference types make more sense when you need shared state, inheritance, or larger, more complex objects.
In design, that changes how I model data. I use struct only when the type is small, logically a single value, and should not be null by default. Otherwise I prefer classes, because copying large structs can hurt performance and mutable structs are error-prone. I also think about equality, boxing, and nullability early, because those affect correctness and API usability.
.NET uses a generational garbage collector. New objects go into Gen 0, survivors move to Gen 1 and Gen 2, and long-lived large objects may go on the Large Object Heap. The GC tracks reachable objects from roots like stack variables, statics, and CPU registers. When memory pressure rises, it pauses managed threads, marks live objects, reclaims unreachable ones, and may compact memory to reduce fragmentation. This is automatic, but it is not instant and it only manages managed memory.
Memory issues still happen in common cases:
- Event subscriptions keep objects alive if you forget to unsubscribe.
- Static fields, caches, and singletons can grow forever.
- IDisposable objects like streams or database connections leak unmanaged resources if not disposed.
- Large allocations, especially strings and byte arrays, can fragment the LOH and spike memory.
- Closures, timers, and background tasks can accidentally capture references longer than intended.
- Native interop, pinned objects, and unsafe code bypass normal GC safety.
Deferred execution means a LINQ query is not run when you define it; it runs when you enumerate it, like with foreach, ToList(), Count(), or First(). That is useful because it avoids unnecessary work, supports composition, and can reflect the latest state of the source collection.
A common bug is accidental multiple enumeration. Example: you build var adults = people.Where(p => ExpensiveCheck(p)); and then call adults.Count() and later adults.ToList(). The filter runs twice, which can hurt performance or repeat side effects. With EF, it can mean multiple database queries. Another issue is source mutation: if the collection changes after the query is defined but before it is enumerated, results may differ from what you expected. If you want a snapshot, materialize once with ToList() or ToArray().
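The double-enumeration bug above can be made visible by counting how often the predicate runs. ExpensiveCheck here is a stand-in for any costly or side-effecting filter:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredDemo
{
    public static int CheckCount;

    static bool ExpensiveCheck(int age)
    {
        CheckCount++;                 // track how often the filter actually runs
        return age >= 18;
    }

    public static void Run()
    {
        var people = new List<int> { 15, 22, 30 };
        var adults = people.Where(ExpensiveCheck); // nothing runs yet

        var count = adults.Count();   // enumerates: filter runs 3 times
        var list  = adults.ToList();  // enumerates again: 3 more times
        Console.WriteLine(CheckCount); // 6, not 3

        // Materializing once gives a snapshot and runs the filter only once.
        CheckCount = 0;
        var snapshot = people.Where(ExpensiveCheck).ToList();
        Console.WriteLine(CheckCount); // 3
    }
}
```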
Here’s the practical way to think about it:
- class is a reference type, good when you want identity, inheritance, shared mutation, or nullability.
- struct is a value type, copied by value, best for small, immutable, value-like data such as Point, DateTime, or Guid.
- record struct is also a value type, but it gives you record-style features like value-based equality, nicer ToString(), and concise syntax.

How I choose:
- Use class for entities or objects with lifecycle, identity, or larger mutable state.
- Use struct for tiny data types where copying is cheap and value semantics make sense.
- Use record struct when it is value-like data and you want built-in equality and concise modeling.
- Avoid large or mutable structs, because copies can be expensive and bug-prone.
- If in doubt, default to class, then optimize only when value semantics are clearly the right fit.
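A short sketch of what record struct buys you, using a hypothetical PointRecord type:

```csharp
using System;

// Value type with generated value-based equality, ToString, and with-support.
public readonly record struct PointRecord(int X, int Y);

public static class RecordStructDemo
{
    public static void Run()
    {
        var a = new PointRecord(1, 2);
        var b = new PointRecord(1, 2);

        Console.WriteLine(a == b);       // True: value-based equality is generated
        Console.WriteLine(a.ToString()); // "PointRecord { X = 1, Y = 2 }"

        // Nondestructive mutation works on record structs too.
        var c = a with { X = 5 };
        Console.WriteLine(c.X);          // 5
    }
}
```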
The big difference is where the query runs and what can be translated.
- LINQ to Objects works on IEnumerable<T>, using normal .NET methods and delegates, and runs in memory.
- EF's LINQ works on IQueryable<T>, building an expression tree that gets translated to SQL.
- With EF, the shape of Where, Select, joins, and Include affects the generated SQL and database round trips.

A practical rule: use LINQ to Objects for in-memory collections, and EF LINQ carefully for database queries.
They build on each other, and the main difference is what they promise: just iteration, collection semantics, list semantics, or query translation.
- IEnumerable<T>: simplest, forward-only iteration. Use when callers only need to read/loop, and you want the least coupling.
- ICollection<T>: adds Count, Add, Remove, Clear. Use when you need basic mutable collection behavior but not indexing.
- IList<T>: adds index access like list[0], Insert, RemoveAt. Use when order and positional access matter.
- IQueryable<T>: represents a query that can be translated by a provider, like EF to SQL. Use for database queries you want executed remotely.

Rule of thumb: expose the narrowest interface you need. For in-memory data, prefer IEnumerable<T> or IReadOnlyList<T> if indexing matters. For EF, keep IQueryable<T> inside the data layer unless the caller truly needs to compose database queries.
Boxing is when a value type like int or struct gets wrapped as an object or interface, which puts it on the heap. Unboxing is the reverse, pulling the value type back out, with an explicit cast.
Why it matters in hot paths:
- Boxing allocates memory on the heap, so it adds GC pressure.
- Unboxing needs a cast and type check, so there is extra CPU work.
- It often happens accidentally, like with non-generic collections such as ArrayList, or APIs typed as object.
- In tight loops or high-throughput code, those small costs add up fast.
- Generics help avoid it, for example List<int> stores int directly without boxing.
Example: assigning int x = 5; object o = x; boxes x. Then int y = (int)o; unboxes it.
They exist to guarantee cleanup of resources, even if an exception happens. In C#, using works with types that implement IDisposable, and await using works with types that implement IAsyncDisposable.
- A using statement creates a scope; disposal happens at the end of that block.
- A using declaration is shorter; disposal happens at the end of the containing scope.
- Both compile down to try/finally, calling Dispose() automatically.
- IDisposable is for synchronous cleanup; IAsyncDisposable is for async cleanup like flushing or network I/O.

A practical way to say it in an interview: use a using statement when lifetime should be tightly scoped, use a using declaration when you want cleaner code but still deterministic disposal, and use await using when cleanup itself is asynchronous.
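The three forms can be sketched side by side. MemoryStream is used here because Stream implements both IDisposable and IAsyncDisposable in modern .NET:

```csharp
using System.IO;
using System.Threading.Tasks;

public static class DisposalDemo
{
    public static void Statement()
    {
        using (var stream = new MemoryStream())
        {
            stream.WriteByte(1);    // stream is usable here
        } // Dispose() called here, at the end of the using block
    }

    public static void Declaration()
    {
        using var stream = new MemoryStream();
        stream.WriteByte(1);        // usable for the rest of the method
    } // Dispose() called here, at the end of the enclosing scope

    public static async Task Asynchronous()
    {
        // await using requires IAsyncDisposable.
        await using var stream = new MemoryStream();
        await stream.WriteAsync(new byte[] { 1, 2, 3 });
    } // DisposeAsync() awaited here
}
```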
Nullable reference types make nullability explicit, so I treat null as part of the type contract instead of a runtime surprise. It changes both how I write APIs and how I review intent.
- string means never null; string? means maybe null.
- I use the null-handling operators ?., ??, and ! sparingly, and review ! very critically.

Records are reference types, unless you use record struct, built for modeling data. Their big value is value-based equality, concise syntax, and nondestructive mutation with with. Two records with the same property values compare equal, which is different from normal classes, where equality is usually by reference.
I’d choose a record when:
- The type represents data, not identity or behavior-heavy objects.
- You want immutability or mostly immutable objects.
- Value equality matters, like DTOs, messages, configs, or API contracts.
- You want cleaner syntax, especially for constructor-style models.
I’d choose a class when identity, lifecycle, or mutable state matters. I’d choose a struct for tiny, value-like types where copy semantics are intentional and allocations matter. For domain entities like Customer, class is usually better. For Address or OrderCreated, record often fits nicely.
I’ve used C# mainly for backend and full-stack work, and I like it because it scales well from simple APIs to larger distributed systems. Most of my experience is with .NET Core and .NET 6+, building services that are clean, testable, and easy to deploy.
One example: I built a multi-tenant API for operational reporting. I handled auth, data access patterns, caching, and performance tuning, then deployed it with CI/CD and containerized it for cloud hosting.
Task, ValueTask, and async void mainly differ in how they represent completion, errors, and allocations.
- Task is the default choice. It is awaitable, composable, can be stored and passed around, and exceptions flow correctly.
- ValueTask is an optimization for hot paths where the result is often available synchronously, avoiding some allocations.
- ValueTask has tradeoffs: it is harder to use correctly, should usually be awaited only once, and adds complexity.
- async void should almost never be used, because callers cannot await it and exceptions are harder to observe.
- async void is acceptable mainly for event handlers, like UI button click methods.

Rule of thumb: return Task for almost everything, Task<T> for async results, ValueTask<T> only after profiling shows it helps, and void only for true event handlers.
They’re closely connected. A delegate is the type-safe function signature, an event is a controlled way to expose delegate-based notifications, and a lambda is just a concise way to create the delegate target.
- A delegate type looks like Func<int, int> or a custom delegate void Notify(string msg).
- Consumers subscribe to an event with += and unsubscribe with -=.
- A lambda looks like x => x * 2, often assigned to a delegate or used as an event handler.
- Putting it together: button.Click += (s, e) => Console.WriteLine("Clicked"); where Click is an event, backed by a delegate type, and the lambda is the handler.

Both define contracts, but I choose them for different reasons.
- An abstract class is for shared state and behavior: a base type with partial implementation, fields, constructors, protected members, and non-public logic.
- An interface is for capabilities: a public contract that unrelated types can implement, and a class can implement many interfaces.

Rule of thumb: use an interface when you want flexibility and multiple implementations. Use an abstract class when derived types share core data, lifecycle, or protected implementation details.
Extension methods are static methods that the compiler lets you call like instance methods. You define them in a static class, mark the first parameter with this, and then myString.IsValid() gets compiled as MyExtensions.IsValid(myString). They do not actually modify the type, they just add nicer syntax.
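A minimal sketch of the myString.IsValid() shape described above (IsValid and MyExtensions are hypothetical names):

```csharp
using System;

public static class MyExtensions
{
    // 'this string' marks the extension receiver.
    public static bool IsValid(this string value) =>
        !string.IsNullOrWhiteSpace(value);
}

public static class ExtensionDemo
{
    public static void Run()
    {
        var name = "Ada";
        Console.WriteLine(name.IsValid());             // instance-call syntax
        Console.WriteLine(MyExtensions.IsValid(name)); // what the compiler emits
    }
}
```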
In practice they shine for small, reusable string.IsNullOrWhiteSpace()-style helpers.

Generics improve type safety by moving checks to compile time. If you use List<int>, you cannot accidentally add a string, so you avoid invalid casts and a lot of runtime errors. They also make APIs cleaner because the type intent is explicit.
On performance, generics reduce boxing and unboxing for value types. For example, List<int> stores int directly, unlike old non-generic collections like ArrayList, which box values into object. That means less allocation and better speed.
Limitations:
- C# generics are reified, but runtime type behavior still has limits.
- You cannot use arithmetic operators on T without extra constraints or helpers.
- You cannot instantiate T unless you use the new() constraint.
- Some constraints are limited, you cannot express every possible type rule.
- Reflection, variance, and nullable interactions can add complexity.
Variance is about assignment compatibility for related generic types. Covariance means "more derived is okay" for outputs, contravariance means "less derived is okay" for inputs.
- Covariance uses out, like IEnumerable<Dog> assigned to IEnumerable<Animal>, because you only read items.
- Contravariance uses in, like Action<Animal> assigned to Action<Dog>, because a handler that accepts any Animal can also accept a Dog.
- In practice, declaring IMessageHandler<in T> let a generic IMessageHandler<BaseMessage> handle OrderCreatedMessage without extra adapters.

I treat exceptions as part of the app’s reliability story, not just error cleanup. The goal is: fail fast on programmer mistakes, recover only when the failure is expected, and always leave good diagnostics.
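The Dog/Animal examples compile exactly as described; a minimal sketch:

```csharp
using System;
using System.Collections.Generic;

public class Animal { }
public class Dog : Animal { }

public static class VarianceDemo
{
    public static string Run()
    {
        // Covariance: IEnumerable<out T> lets a sequence of Dogs be read as Animals.
        IEnumerable<Dog> dogs = new List<Dog> { new Dog() };
        IEnumerable<Animal> animals = dogs;

        // Contravariance: Action<in T> lets an Animal handler accept Dogs.
        string seen = "";
        Action<Animal> handleAnimal = a => seen = a.GetType().Name;
        Action<Dog> handleDog = handleAnimal;
        handleDog(new Dog());
        return seen; // "Dog"
    }
}
```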
- Never swallow a broad Exception unless you are logging and rethrowing or translating it.
- Rethrow with throw;, not throw ex;.

A solid strategy is global handling plus local intent: middleware or top-level handlers for unhandled exceptions, custom domain exceptions where useful, retries only for transient failures, and user-friendly messages while logs keep the technical detail.
The key difference is stack trace preservation.
- throw; rethrows the current exception and keeps the original stack trace.
- throw ex; throws the same exception object again, but resets the stack trace to the current catch block.
- Inside a catch, use throw; unless you are intentionally wrapping with a new exception: throw new SomeException("extra context", ex);

Example: if a repository method fails and you catch it in a service, throw; lets you see the repository line that broke. throw ex; makes it look like the error started in the service catch block, which hides the real source.
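The repository/service example can be sketched directly (Repository and the service methods are hypothetical stand-ins):

```csharp
using System;

public static class RethrowDemo
{
    static void Repository() => throw new InvalidOperationException("db failed");

    public static void ServiceGood()
    {
        try { Repository(); }
        catch
        {
            throw; // stack trace still includes the Repository() frame
        }
    }

    public static void ServiceBad()
    {
        try { Repository(); }
        catch (Exception ex)
        {
            throw ex; // stack trace now starts here, hiding Repository()
        }
    }

    public static void Run()
    {
        try { ServiceGood(); }
        catch (Exception ex)
        {
            // The preserved trace points back into Repository().
            Console.WriteLine(ex.StackTrace);
        }
    }
}
```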
Pattern matching makes branching logic more declarative, so the code says what shape of data you expect instead of how to inspect it step by step. That usually means fewer casts, fewer null checks, and less nesting.
- is patterns replace manual as plus null checks, so intent is obvious.
- switch expressions turn long if/else chains into compact, exhaustive rules.
- Relational and logical patterns, like <, >, and, or, make validation rules read like business rules.

In real codebases, this pays off when models evolve. If a new subtype or state gets added, pattern-based code tends to fail in clearer places, making maintenance safer and faster.
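A compact sketch of those patterns together, using a hypothetical Order model and discount rule:

```csharp
using System;

public record Order(decimal Total, bool IsVip);

public static class PricingDemo
{
    public static decimal Discount(Order? order) => order switch
    {
        null               => 0m,     // null handled declaratively, no manual check
        { IsVip: true }    => 0.20m,  // property pattern
        { Total: > 1000m } => 0.10m,  // relational pattern
        _                  => 0m,
    };

    public static void Run()
    {
        Console.WriteLine(Discount(new Order(1500m, false))); // 0.10
        Console.WriteLine(Discount(null));                    // 0
    }
}
```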
Expression-bodied members are a compact C# syntax for members that can be written as a single expression, using =>. You’ll see them on methods, read-only properties, constructors, finalizers, indexers, and even property accessors in newer C# versions.
A typical example is public int Count => _items.Count;. My rule is simple: if it reads naturally in one line, use it. If I have to mentally unpack it, I switch back to a normal block body.
In .NET, dependency injection usually means you register services in a container, then let the framework supply them where needed, usually through constructor injection. In ASP.NET Core, this is built in via IServiceCollection, with lifetimes like Transient, Scoped, and Singleton.
- You register mappings in Program.cs, like IEmailService to EmailService.
- Typical lifetime choices: Scoped for request-based services, Singleton for stateless shared services, and Transient for lightweight, short-lived objects.

In .NET dependency injection, the difference is all about how long an instance lives.
- Transient: new instance every time it’s requested. Good for lightweight, stateless services.
- Scoped: one instance per scope, usually per web request. Good for request-specific state, EF Core DbContext, unit-of-work patterns.
- Singleton: one instance for the whole app lifetime. Good for shared, stateless, thread-safe services, config readers, caches.

Wrong lifetime choices cause subtle bugs:
- Registering DbContext as a singleton is a classic bug: thread-safety issues, stale tracking, corrupted behavior.

I keep it methodical: stabilize first, increase observability, narrow the gap between prod and local, then test hypotheses safely.
I reach for tools like dotnet-trace, dotnet-dump, or SQL profiling, then fix, verify, and add a regression test plus better alerts.

I use LINQ a lot for readability, but in hot paths I treat it as a tradeoff, not a default. I’m fine with LINQ when the data size is modest, the query is clear, and profiling shows it is not a bottleneck. I’ve used it effectively for projections, filtering, grouping, and shaping data at boundaries, especially in service layers and reporting code.
- I watch for chained Where().Select().ToList() patterns inside per-request or per-item processing.
- In hot paths I may switch to a for loop, pre-size collections, and reduce temporary objects.

Make the object’s state set once, then never allow it to change.
- Use readonly fields or get-only properties, and set them in the constructor.
- Prefer inherently immutable types like string, record, or immutable collections.
- For mutable collections like List<T>, make defensive copies on the way in, and often on the way out.
- record and init properties are common tools, but init is only immutable after construction.

Tradeoffs are mostly about performance and ergonomics. Immutability makes code easier to reason about, thread-safe by default, and safer for caching or sharing. The downside is extra allocations, copying costs, and sometimes more verbose update patterns, since changes usually mean creating a new object instead of modifying the existing one.
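A minimal sketch of the record-plus-init approach and its update pattern (Customer and its properties are hypothetical):

```csharp
using System;

public record Customer(string Name, string Email)
{
    public int LoyaltyPoints { get; init; }   // settable only during construction
}

public static class ImmutabilityDemo
{
    public static void Run()
    {
        var original = new Customer("Ada", "ada@example.com") { LoyaltyPoints = 10 };

        // 'with' copies the record, changing only what you name.
        var updated = original with { LoyaltyPoints = 20 };

        Console.WriteLine(original.LoyaltyPoints); // 10: the original is untouched
        Console.WriteLine(updated.LoyaltyPoints);  // 20
        Console.WriteLine(original == updated);    // False: value equality sees the change
    }
}
```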
I usually use mocks at boundaries, not everywhere. In C#, that means repositories, HTTP clients, message buses, clocks, file systems, and anything slow or nondeterministic. For core business logic, I prefer plain unit tests with real objects or simple fakes.
For HttpClient, I mock HttpMessageHandler, or use a lightweight test server for higher confidence.

The main rule is: always test the Task, not the side effects around it. In C#, your test method should usually be async Task, then await the method under test so exceptions and timing behave correctly.
- Declare tests as public async Task TestName(), not async void.
- Assert exceptions with Assert.ThrowsAsync<Exception>(() => service.DoWorkAsync()).
- Stub async dependencies with ReturnsAsync(...) or Task.FromResult(...).
- Avoid .Result, .Wait(), or .GetAwaiter().GetResult(); they can deadlock and hide real async behavior.
- Avoid Thread.Sleep; instead await real signals or use test doubles.

A common pitfall is writing a test that calls an async method without awaiting it. The test may pass even though the method later throws.
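A sketch of the pattern in xUnit style (xUnit is assumed here; NUnit and MSTest have direct equivalents, and GreeterService is a hypothetical subject under test):

```csharp
using System;
using System.Threading.Tasks;
using Xunit;

public class GreeterService
{
    public async Task<string> GreetAsync(string? name)
    {
        await Task.Yield();                       // force genuinely async completion
        if (name is null) throw new ArgumentNullException(nameof(name));
        return $"Hello, {name}";
    }
}

public class GreeterServiceTests
{
    [Fact]
    public async Task GreetAsync_ReturnsGreeting()    // async Task, never async void
    {
        var service = new GreeterService();
        var result = await service.GreetAsync("Ada"); // await, do not block
        Assert.Equal("Hello, Ada", result);
    }

    [Fact]
    public async Task GreetAsync_Throws_OnNull()
    {
        var service = new GreeterService();
        await Assert.ThrowsAsync<ArgumentNullException>(
            () => service.GreetAsync(null));          // exception asserted via the Task
    }
}
```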
I avoid EF issues by being explicit about query shape, tracking, and what runs in SQL.
- I load related data explicitly with Include, or better, project with Select so EF fetches exactly what the API needs.
- For read-only queries I add AsNoTracking(), and use AsNoTrackingWithIdentityResolution() if related entities repeat.
- I inspect generated SQL with ToQueryString().
- I page with Skip and Take, and prefer split queries when one big join causes Cartesian explosion.
- I consider EF.CompileQuery if profiling shows repeated overhead.

In practice, I also enable SQL logging and review query plans early, because EF problems usually show up as bad SQL, not bad C#.
CancellationToken is .NET’s way to stop async or long-running work cooperatively. It does not kill a thread. A caller creates a CancellationTokenSource, passes the token down, and callee code checks token.IsCancellationRequested, calls token.ThrowIfCancellationRequested(), or passes the token into cancellable APIs like HttpClient, Task.Delay, EF Core, or streams.
To propagate it properly:
- Accept a CancellationToken in every async method that does I/O or long work.
- Pass it all the way down to dependencies, do not create new sources unless you need timeouts or linked tokens.
- Use CancellationToken.None only when work truly must not be canceled.
- If canceled, let OperationCanceledException bubble, do not wrap it as a generic failure.
- In loops, check the token regularly and stop cleanly, releasing resources in finally.
In ASP.NET Core, usually start with HttpContext.RequestAborted and flow that through your services and repositories.
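The propagation rules above can be sketched end to end (ProcessAsync is a hypothetical worker; the short timeout just triggers cancellation quickly):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancellationDemo
{
    public static async Task ProcessAsync(CancellationToken token)
    {
        for (var i = 0; i < 100; i++)
        {
            token.ThrowIfCancellationRequested(); // check regularly in loops
            await Task.Delay(50, token);          // pass the token into cancellable APIs
        }
    }

    public static async Task<string> Run()
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(120));
        try
        {
            await ProcessAsync(cts.Token);
            return "completed";
        }
        catch (OperationCanceledException)
        {
            // Let cancellation surface as OperationCanceledException,
            // rather than wrapping it as a generic failure.
            return "canceled cleanly";
        }
    }
}
```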
A deadlock is when two or more threads block each other forever, usually because each holds a lock the other needs. In C#, it also shows up with async code, like calling .Result or .Wait() on a task that needs the current synchronization context to continue.
- Classic deadlocks involve lock, Monitor, or task waits.
- In async code, prefer await instead of .Result or .Wait().
- Reduce shared mutable state with ConcurrentDictionary, channels, or immutable objects.

I’d answer this with a quick STAR structure: situation, what was slow, what I changed, and the measurable result.
At one company, a .NET API endpoint was timing out during peak traffic because it made several sequential database calls and did too much mapping in memory.
- I profiled it with Application Insights and SQL query metrics, focusing on p95 latency, timeout rate, DB duration, and CPU.
- I found N+1 queries and unnecessary ToList() materialization.
- I consolidated queries, added the right indexes, switched some paths to async, and introduced response caching for rarely changing data.
- I also reduced payload size and used projections instead of loading full entities.
- Result: p95 dropped from about 2.8 seconds to 700 ms, timeout rate fell by over 80%, and CPU usage on the app nodes decreased around 25%.
These LINQ methods differ in what they expect and how they fail:
- First(): returns the first match, throws if there is no match.
- FirstOrDefault(): returns the first match, or default like null if none.
- Single(): expects exactly one match, throws if there are zero or more than one.
- SingleOrDefault(): expects zero or one match, returns default if none, throws if more than one.

How I choose:
- Use First when any one match is fine and at least one should exist.
- Use FirstOrDefault when no match is acceptable.
- Use Single when the data must be unique, like lookup by a unique key.
- Use SingleOrDefault when uniqueness is expected, but absence is allowed.
Key idea, Single* enforces uniqueness, First* does not.
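The four behaviors on one data set:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LinqSingleDemo
{
    public static void Run()
    {
        var ids = new List<int> { 1, 2, 2, 3 };

        Console.WriteLine(ids.First(x => x == 2));          // 2: first match wins
        Console.WriteLine(ids.FirstOrDefault(x => x == 9)); // 0: default(int) when absent
        Console.WriteLine(ids.Single(x => x == 3));         // 3: exactly one match

        // Single throws when uniqueness is violated.
        try { ids.Single(x => x == 2); }
        catch (InvalidOperationException) { Console.WriteLine("not unique"); }

        // SingleOrDefault allows absence but still enforces uniqueness.
        Console.WriteLine(ids.SingleOrDefault(x => x == 9)); // 0
    }
}
```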
In C#, equality has two common forms: reference equality and value equality. For reference types, == usually checks whether two variables point to the same object, unless the operator is overloaded. Equals is for logical equality, and value types often inherit field-by-field equality unless overridden.
- If you override Equals, you should also override GetHashCode, because hash-based collections like Dictionary and HashSet rely on both.
- Implement IEquatable<T> for strongly typed, faster equality checks.
- Keep Equals reflexive, symmetric, transitive, and consistent, and handle null safely.

In modern C#, HashCode.Combine(...) is the usual way to build a good hash code.
== and Equals can mean different things in C#.
- == is an operator; its behavior depends on the type and whether the operator is overloaded.
- Equals is a method; by default on object it checks reference equality, but many types override it for value equality.
- For reference types, == usually means same object reference, unless the type overloads it, like string.
- For value types, Equals usually compares values, and == may or may not be available unless defined.

Operator overloading changes expectations because a type can make == do value comparison instead of reference comparison. Example: two different instances of a Money class could return true for a == b if amount and currency match. Best practice is to keep ==, Equals, and GetHashCode consistent so collections and comparisons behave predictably.
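The Money example can be sketched with all three members kept consistent (Money and its shape here are hypothetical):

```csharp
using System;

public readonly struct Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency) =>
        (Amount, Currency) = (amount, currency);

    public bool Equals(Money other) =>
        Amount == other.Amount && Currency == other.Currency;

    public override bool Equals(object? obj) => obj is Money m && Equals(m);

    // Keep GetHashCode consistent with Equals so Dictionary/HashSet behave.
    public override int GetHashCode() => HashCode.Combine(Amount, Currency);

    public static bool operator ==(Money a, Money b) => a.Equals(b);
    public static bool operator !=(Money a, Money b) => !a.Equals(b);
}

public static class EqualityDemo
{
    public static void Run()
    {
        var a = new Money(10m, "USD");
        var b = new Money(10m, "USD");
        Console.WriteLine(a == b);      // True: value comparison via the overload
        Console.WriteLine(a.Equals(b)); // True: consistent with ==
    }
}
```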
A few big ones come up a lot in C#:
- Race conditions on shared state, fixed with lock, ConcurrentDictionary, or making data immutable.
- Non-atomic check-then-act updates, fixed with Interlocked, or by moving the whole operation inside one lock.
- Cross-thread UI access, fixed by marshaling back through Dispatcher or SynchronizationContext.
- Async deadlocks from .Result or .Wait(), avoided by going async end-to-end and using ConfigureAwait(false) in library code.
- Collection-modified-during-enumeration errors, avoided by snapshotting with .ToList() or using concurrent collections.

One concrete case: we had duplicate order processing because two workers read the same status before either updated it. I fixed it by making the transition atomic in the database and adding an app-level lock keyed by order ID.
They help by cutting allocations, reducing GC pressure, and avoiding unnecessary copies, which matters a lot in hot paths like parsing, networking, and serialization.
- Span<T> and ReadOnlySpan<T> let you work with slices of arrays, strings, or stack memory without allocating new objects.
- Memory<T> is the heap-friendly version of Span<T>, useful when data needs to live across async calls or be stored in fields.
- stackalloc with spans can put small temporary buffers on the stack, which is very fast and avoids GC entirely.
- ArrayPool<T> reuses large arrays instead of constantly allocating and collecting them, which helps throughput and latency.

The tradeoff is complexity. These tools are best when profiling shows allocation or copying is a bottleneck.
They all coordinate access, but at different levels and costs.
- lock is syntax sugar over Monitor. Use it for simple in-process mutual exclusion around a short critical section. Best default for most cases.
- Monitor gives more control than lock, like TryEnter, timeouts, and Wait/Pulse for thread coordination. Use it when you need that extra control.
- Mutex is heavier and can work across processes via a named mutex. Use it when different processes must share exclusive access.
- SemaphoreSlim limits concurrency instead of allowing only one thread. Great for throttling, like allowing 10 requests at a time. It also has good async support with WaitAsync.
- ReaderWriterLockSlim allows many readers or one writer. Use it when reads are frequent, writes are rare, and contention is real.

Rule of thumb: start with lock, use SemaphoreSlim for async throttling, ReaderWriterLockSlim for read-heavy workloads, and Mutex only for cross-process scenarios.
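A sketch of the async-throttling case: SemaphoreSlim capping concurrency at 3 (the delay stands in for real I/O):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottleDemo
{
    // At most 3 workers inside the critical region at once.
    private static readonly SemaphoreSlim Gate = new(3, 3);

    public static async Task WorkAsync(int id)
    {
        await Gate.WaitAsync();      // async-friendly: no thread blocked while waiting
        try
        {
            await Task.Delay(100);   // simulate I/O under the throttle
        }
        finally
        {
            Gate.Release();          // always release, even on failure
        }
    }

    public static Task RunAll() =>
        Task.WhenAll(Enumerable.Range(0, 10).Select(WorkAsync));
}
```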
Reflection is how C# code can inspect metadata and interact with types at runtime, like discovering properties, methods, generic arguments, or creating objects dynamically. Attributes are declarative metadata you attach to code, then read through reflection to change behavior. Common uses are serializers, ORMs, dependency injection, model validation, plugin loading, and test frameworks.
Tradeoffs:
- Flexible and extensible, especially when behavior is driven by metadata.
- Slower than direct code, because runtime inspection and invocation cost more.
- Less compile-time safety; errors shift to runtime, like misspelled member names.
- Harder to debug and maintain, because control flow becomes less obvious.
- Can hurt trimming, AOT, and obfuscation scenarios unless carefully handled.
In interviews, I’d say use reflection at boundaries and framework code, not in hot paths or core domain logic.
I treat readability as the first constraint, then use modern C# where it clearly improves signal, safety, or boilerplate. Idiomatic code is great, but if half the team has to stop and decode it, the value drops.
I lean on features like var when the type is clear, and records for simple data models.

Source generators move work from runtime to compile time. Reflection is flexible and easy for discovery, but it costs startup time, allocations, trimming issues, and can be harder for AOT scenarios. Generators produce normal C# ahead of time, so you get better performance, stronger typing, and earlier errors in the build.
Where they fit:
- High-volume serialization, DI wiring, routing, logging, mappers, and validation.
- Native AOT, Blazor WebAssembly, mobile, and microservices where startup matters.
- Framework or platform code, where conventions are known at compile time.
- Cases where you want analyzers plus generated code for a better developer experience.
I would not replace reflection everywhere. If the shape is truly dynamic, plugin loading, unknown assemblies, user-defined types at runtime, reflection still wins. My rule is, use generators when metadata is knowable at build time and performance or AOT compatibility matters.
They each organize code at a different level, and together they keep a C# app maintainable and safe.
- Assemblies are the deployment unit, a .dll or .exe; they package compiled code, metadata, and resources.
- Namespaces organize types logically, like MyApp.Services.
- Access modifiers control visibility, public, private, internal, and protected, so you expose only what other code should use.

internal is a great tool for keeping your public API small while still organizing code into useful internal types. In testing, InternalsVisibleTo lets a test assembly access those internals, so you can validate important logic without making everything public.
- I default to internal to hide implementation details and reduce coupling.
- I add [assembly: InternalsVisibleTo("MyProject.Tests")] when internals contain meaningful domain logic worth testing directly.

I usually treat InternalsVisibleTo as a pragmatic compromise: helpful, but also a signal to review whether the code should be refactored into a separate component with a clearer public surface.
I’d answer this with a quick STAR structure: situation, task, actions, result, then focus on how I reduced risk before making bigger design changes.
At one job, I inherited a legacy .NET app with huge service classes, static helpers, and almost no tests. The first thing I changed was not business logic, it was safety and visibility.
- I added characterization tests around the most critical workflows, so I could refactor without breaking hidden behavior.
- Next, I isolated external dependencies, like file I/O and database calls, behind interfaces.
- Then I broke apart one high-pain class at a time, usually by extracting smaller services with single responsibilities.
- I also removed duplicated logic and replaced magic strings and flags with enums and clear models.
- Result: we cut production bugs and made new feature work much faster.
I’d answer this with a quick STAR structure, Situation, Task, Action, Result, and keep the focus on how I investigated, not just the fix.
At one job, we had a C# API that would randomly create duplicate orders under load, but we couldn’t reproduce it locally. I started by tightening logging around the request flow, correlation IDs, and database writes, then compared successful vs duplicate cases. That showed two concurrent requests were passing the same validation before either transaction committed. I reproduced it with a load test, confirmed a race condition, and fixed it with an idempotency key plus a database unique constraint. After deployment, duplicates dropped to zero, and we kept the extra telemetry so similar concurrency issues would be easier to trace.
I handle that by separating preference from risk: clarify the goal, bring evidence, propose a small experiment, then commit once the team decides.
On one C# API project, the team wanted to put most business logic directly in controllers to move faster. I disagreed because it would make testing and reuse harder. I did not make it personal, I wrote up two options, a controller-heavy approach and a service-layer approach, and compared them on testability, change impact, and delivery time. Then I built a thin vertical slice both ways and showed that the service approach added very little overhead but gave us cleaner unit tests and easier maintenance. The team adopted the service layer. In cases where my view is not chosen, I still align fully and help make the agreed approach successful.
I review PRs in layers: first correctness, then readability, then maintainability, then performance and safety. I also try to understand the intent from the ticket or description before I comment, so I do not nitpick something that was a deliberate tradeoff.
In C# specifically, I check IDisposable usage, await instead of blocking calls, LINQ that is clear and not accidentally expensive, and nullable reference warnings.

I’d treat it like a concurrency bug first, then narrow it with evidence instead of guessing.
I’d hunt for .Result, .Wait(), blocking locks around async calls, sync-over-async, and missing ConfigureAwait in library code if context matters. Once I find the pattern, I’d fix it and add a stress test so it cannot regress.
A few newer C# features genuinely changed how I code because they reduce noise without hiding intent.
- record and record struct for immutable models; value semantics are built in.
- Pattern matching with switch expressions, property patterns, and relational patterns makes branching cleaner and safer.
- init setters and required members make object construction much clearer.

A few I use carefully or avoid:
- dynamic, unless interop forces it, because it gives up compile-time safety.
- async void, except true event handlers, because error handling gets messy.

I’d troubleshoot it from the outside in, first proving whether the cache is the source, then narrowing down why data differs.
If asked for an example, I’d mention a cache key missing userId, which caused cross-user data leakage and inconsistent responses.
I’d start with a risk-first assessment, not a rewrite mindset. The goal is to find high-value, low-risk improvements, prove them in small steps, and keep the app stable.
That shows pragmatism, technical judgment, and respect for business risk.