Identify and eliminate bottlenecks in your application for optimized performance.
In .NET applications, threading enables an application to execute multiple threads concurrently within the same process. Developers can create threads using the classes provided by the System.Threading namespace.
After creation, all the threads within the process share the same address space, can access shared resources, and perform their tasks concurrently. To ensure threads access these shared resources in a safe, coordinated manner, the System.Threading namespace provides methods for synchronizing thread activities.
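For example, here's a minimal sketch of creating and coordinating two threads with the Thread class; the worker method name and messages are illustrative:
using System;
using System.Threading;

public class ThreadBasics
{
    static void Main()
    {
        // Create two threads that run the same worker method concurrently
        var worker1 = new Thread(() => PrintMessages("Worker 1"));
        var worker2 = new Thread(() => PrintMessages("Worker 2"));

        worker1.Start();
        worker2.Start();

        // Wait for both threads to finish before the process exits
        worker1.Join();
        worker2.Join();
    }

    static void PrintMessages(string name)
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine($"{name}: message {i}");
        }
    }
}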
Multithreading offers numerous benefits to .NET applications, including improved throughput and responsiveness. But despite these benefits, threading can introduce several performance-impacting issues into your application, and resolving them requires careful programming.
This hands-on article explores the primary causes of these .NET threading issues and some solutions and best practices to resolve them.
Threading issues such as deadlocks, race conditions, and synchronization errors significantly impact application stability and performance.
Deadlocks can cause applications to hang or become unresponsive, while race conditions and synchronization errors can result in unexpected application behavior.
This tutorial discusses each issue in detail, exploring causes and potential consequences before showing you how to fix them.
To follow this tutorial, ensure you have a recent .NET SDK installed and a C# development environment such as Visual Studio or Visual Studio Code.
Deadlocks occur when two or more threads are blocked because they’re each waiting on the other to release a resource, leaving them stuck in an unresolvable wait.
For example, suppose Thread 1 holds a lock on resource X and waits for a lock on resource Y, while Thread 2 holds a lock on resource Y and waits for a lock on resource X. These two threads block each other as they wait for the other thread to release the lock they need. This deadlock causes the application to stall, affecting its usability.
There are several potential causes of deadlocks, including threads acquiring multiple locks in inconsistent orders, nested locks, and threads holding locks while they wait on long-running operations.
Now that you’re familiar with the causes of deadlocks, you can mitigate these issues. This tutorial builds on the following example of an application containing two methods that acquire different locks. MethodA acquires lock1 first and then lock2, while MethodB acquires lock2 first and then lock1. After acquiring each lock, the methods simulate work with the Thread.Sleep() method:
using System;
using System.Threading;

public class DeadlockExample
{
    static object lock1 = new object();
    static object lock2 = new object();

    public void MethodA()
    {
        lock (lock1)
        {
            Console.WriteLine("Thread 1: Acquired lock1");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 1: Waiting for lock2");
            lock (lock2)
            {
                // Simulate work here
                Console.WriteLine("Thread 1: Acquired lock2");
                Thread.Sleep(5000);
            }
        }
    }

    public void MethodB()
    {
        lock (lock2)
        {
            Console.WriteLine("Thread 2: Acquired lock2");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 2: Waiting for lock1");
            lock (lock1)
            {
                // Simulate work here
                Console.WriteLine("Thread 2: Acquired lock1");
                Thread.Sleep(5000);
            }
        }
    }
}
A deadlock occurs when you run MethodA and MethodB in two different threads:
public class Program
{
    static void Main()
    {
        var app = new DeadlockExample();

        // Create two threads that call MethodA and MethodB respectively
        var thread1 = new Thread(() => app.MethodA());
        var thread2 = new Thread(() => app.MethodB());

        // Start both threads and wait for them to complete
        thread1.Start();
        thread2.Start();
        thread1.Join();
        thread2.Join();
    }
}
Thread 1 runs MethodA, which acquires lock1 and then blocks while waiting to acquire lock2. Thread 2 runs MethodB, which acquires lock2 and then blocks while waiting to acquire lock1. Consequently, both threads block each other while waiting for the other to release the lock, resulting in a deadlock.
To resolve this deadlock, reorder the locks so that all threads acquire them in the same order. For instance, you can modify the DeadlockExample class so both methods always acquire lock1 before lock2:
public class DeadlockExample
{
    static object lock1 = new object();
    static object lock2 = new object();

    public void MethodA()
    {
        lock (lock1)
        {
            Console.WriteLine("Thread 1: Acquired lock1");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 1: Waiting for lock2");
            lock (lock2)
            {
                // Simulate work here
                Console.WriteLine("Thread 1: Acquired lock2");
                Thread.Sleep(5000);
            }
        }
    }

    public void MethodB()
    {
        lock (lock1)
        {
            Console.WriteLine("Thread 2: Acquired lock1");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 2: Waiting for lock2");
            lock (lock2)
            {
                // Simulate work here
                Console.WriteLine("Thread 2: Acquired lock2");
                Thread.Sleep(5000);
            }
        }
    }
}
Here, MethodA and MethodB both acquire lock1 before requesting lock2. This consistent ordering ensures they never wait on each other to release locks, preventing a deadlock.
In addition to reordering locks, you can use lock timeout strategies to detect and resolve deadlocks. In timeout-based detection, you set the lock-acquisition timeout to a value significantly higher than the wait you’d expect in a non-deadlocked situation. If acquiring the lock takes longer than the timeout, the operation signals failure, for instance, by raising an exception or returning false.
To resolve a deadlock using a timeout strategy, use the Monitor.TryEnter method instead of a lock statement. This method attempts to acquire a lock on a resource within a specified time and returns a Boolean value indicating whether it obtained the lock. You can modify the MethodA and MethodB methods to detect deadlocks with Monitor.TryEnter, as follows:
public class DeadlockExample
{
    static object lock1 = new object();
    static object lock2 = new object();

    public void MethodA()
    {
        lock (lock1)
        {
            Console.WriteLine("Thread 1: Acquired lock1");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 1: Waiting for lock2");
            if (Monitor.TryEnter(lock2, TimeSpan.FromSeconds(5)))
            {
                try
                {
                    Console.WriteLine("Thread 1: Acquired lock2");
                }
                finally
                {
                    // Always release the lock, even if the work throws
                    Monitor.Exit(lock2);
                }
            }
            else
            {
                Console.WriteLine("Thread 1: Failed to acquire lock2");
            }
        }
    }

    public void MethodB()
    {
        lock (lock2)
        {
            Console.WriteLine("Thread 2: Acquired lock2");
            // Simulate work here
            Thread.Sleep(100);
            Console.WriteLine("Thread 2: Waiting for lock1");
            if (Monitor.TryEnter(lock1, TimeSpan.FromSeconds(5)))
            {
                try
                {
                    Console.WriteLine("Thread 2: Acquired lock1");
                }
                finally
                {
                    // Always release the lock, even if the work throws
                    Monitor.Exit(lock1);
                }
            }
            else
            {
                Console.WriteLine("Thread 2: Failed to acquire lock1");
            }
        }
    }
}
Here, the Monitor.TryEnter method attempts to acquire the second lock for up to 5 seconds. If it succeeds within that time, the thread runs the protected code and releases the lock in the finally block. Otherwise, the method prints a failure message and backs out, eventually releasing the lock it already holds so the other thread can acquire it.
While a timeout strategy like this can prevent a permanent stall, it has some drawbacks. A deadlock persists until the timeout expires, and the strategy may abort legitimate operations prematurely because they merely appear deadlocked. Additionally, a timeout strategy doesn’t guarantee deadlocks won’t occur; it only allows you to detect and resolve them early.
A race condition occurs when two or more threads access the same resource and try to modify it at the same time. The thread scheduling algorithm can switch between threads at any point, so it’s unclear which thread will make the final modification; in effect, the threads race each other to change the data. Race conditions cause the application to behave unpredictably because the final value of the resource depends on the order in which the threads execute.
Some causes of race conditions are unsynchronized access to shared variables, non-atomic read-modify-write operations (such as incrementing a counter), and check-then-act sequences in which the shared state changes between the check and the action.
To prevent race conditions, ensure shared resources are properly synchronized using primitives such as locks, mutexes, and semaphores.
Here’s an example of C# code that causes a race condition:
using System;
using System.Threading;

public class Program
{
    static int count = 0;

    static void Main(string[] args)
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Count = " + count);
    }

    static void Increment()
    {
        for (int i = 0; i < 1000000; i++)
        {
            count++;
        }
    }
}
This code has two threads that increment a shared variable called count. Since the ++ operator isn’t an atomic operation, the two threads can interleave their updates, leading to a race condition: one thread can overwrite the other thread’s changes, producing an incorrect result instead of the expected value of 2,000,000.
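To see why, note that count++ expands into separate read, increment, and write steps, roughly equivalent to the following sketch (the temp variable is illustrative):
static int count = 0;

static void IncrementOnce()
{
    int temp = count;   // 1. Read the current value
    temp = temp + 1;    // 2. Increment the local copy
    count = temp;       // 3. Write the new value back
    // If another thread reads count between steps 1 and 3,
    // both threads write the same value and one increment is lost.
}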
One of the primitives you can use to prevent race conditions is a mutex. It’s a synchronization primitive that ensures only one thread acquires a lock on a shared resource at any given time. The mutex suspends another thread requesting the lock until the first thread releases the resource.
In the example above, a mutex ensures only one thread can access the shared count variable at a time.
To see how a mutex works, modify the Program class as follows:
public class Program
{
    static int count = 0;
    static Mutex countMutex = new Mutex();

    static void Main(string[] args)
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Count = " + count);
    }

    static void Increment()
    {
        for (int i = 0; i < 1000000; i++)
        {
            countMutex.WaitOne();
            try
            {
                count++;
            }
            finally
            {
                countMutex.ReleaseMutex();
            }
        }
    }
}
Here, you’re creating a mutex object named countMutex. Then, in the Increment method, you’re using WaitOne to wait for the mutex to become available. Once it’s available, you increment the counter and then release the mutex using the ReleaseMutex method.
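Note that a Mutex is an operating-system object, so unlike the other primitives in this article, it can also synchronize access across processes. Here’s a rough sketch of that pattern; the mutex name is illustrative:
using System;
using System.Threading;

public class CrossProcessExample
{
    static void Main()
    {
        // A named mutex is visible to other processes on the same machine
        using (var sharedMutex = new Mutex(initiallyOwned: false, name: "Global\\MyAppSharedResource"))
        {
            sharedMutex.WaitOne();   // Block until no other process holds the mutex
            try
            {
                Console.WriteLine("Holding the cross-process mutex");
            }
            finally
            {
                sharedMutex.ReleaseMutex();
            }
        }
    }
}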
Aside from a mutex, you can use a semaphore to prevent race conditions. Semaphores are a type of synchronization primitive that limit the number of threads that can access a shared resource simultaneously.
In the counter example above, only one thread is supposed to increment the counter at a time. Therefore, modify the Program class by creating a SemaphoreSlim object named countSemaphore with its initial count set to 1. Then, call the Wait method to wait for the semaphore to become available and increment the count once it is. Finally, release the semaphore:
public class Program
{
    static int count = 0;
    static SemaphoreSlim countSemaphore = new SemaphoreSlim(1);

    static void Main(string[] args)
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Count = " + count);
    }

    static void Increment()
    {
        for (int i = 0; i < 1000000; i++)
        {
            countSemaphore.Wait();
            try
            {
                count++;
            }
            finally
            {
                countSemaphore.Release();
            }
        }
    }
}
Both mutexes and semaphores are useful for synchronizing access to a shared resource. When you need to allow multiple threads to access a resource, use a semaphore instead of a mutex since it allows you to specify the number of threads with access.
For improved performance and lower resource consumption within a single process, use SemaphoreSlim over the Semaphore class.
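For example, here’s a minimal sketch assuming a workload where up to three threads may enter the protected section at once; the thread count and messages are illustrative:
using System;
using System.Threading;

public class SemaphorePoolExample
{
    // Allow up to three threads into the protected section at once
    static SemaphoreSlim pool = new SemaphoreSlim(3);

    static void Main()
    {
        for (int i = 1; i <= 6; i++)
        {
            int id = i;
            new Thread(() => DoWork(id)).Start();
        }
    }

    static void DoWork(int id)
    {
        pool.Wait();   // Blocks while three threads are already inside
        try
        {
            Console.WriteLine($"Thread {id} entered");
            Thread.Sleep(1000);   // Simulate work
        }
        finally
        {
            pool.Release();
        }
    }
}
Because the threads are foreground threads, the process stays alive until all six finish, and at most three of them run the protected section at any moment.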
A monitor is a synchronization primitive that allows you to lock a section of your code that only one thread can access at a time. It prevents race conditions by ensuring no two threads can run in this section. To see how it works, modify the previous example to lock the section that increments the count value:
public class Program
{
    static int count = 0;
    static object lockObj = new object();

    static void Main(string[] args)
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Count = " + count);
    }

    static void Increment()
    {
        for (int i = 0; i < 1000000; i++)
        {
            Monitor.Enter(lockObj);
            try
            {
                count++;
            }
            finally
            {
                Monitor.Exit(lockObj);
            }
        }
    }
}
In the code above, you’re using the Monitor.Enter method to acquire a lock before incrementing the count variable, and the Monitor.Exit method to release the lock afterward, allowing another thread to take ownership of it. By wrapping this section in a try...finally block, you ensure the thread always releases the lock, even if an exception is thrown. This prevents the thread from indefinitely blocking other threads that need to modify the count variable.
Compared to mutexes, monitors are more lightweight. However, you should only use them within a single process, because a monitor is tied to a particular object instance rather than a named operating-system handle.
The lock keyword also helps prevent race conditions. By locking sections of code, you ensure that only one thread can execute them at a time. You can apply the lock keyword in the Program class as follows, so that only one thread can modify the count variable at a time:
public class Program
{
    static int count = 0;
    static object lockObj = new object();

    static void Main(string[] args)
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Count = " + count);
    }

    static void Increment()
    {
        for (int i = 0; i < 1000000; i++)
        {
            lock (lockObj)
            {
                count++;
            }
        }
    }
}
In this Program class, you create a plain object named lockObj to serve as the lock target. The Increment method uses the lock keyword to acquire a lock on lockObj before incrementing the count variable, ensuring only one thread increments the count at a time, even when multiple threads run concurrently. Under the hood, the lock statement compiles to Monitor.Enter and Monitor.Exit calls wrapped in a try...finally block.
Synchronization is essential to ensure multiple threads can run concurrently without causing conflict or raising errors. It allows threads exclusive access to shared resources, preventing other threads from modifying these resources until they finish.
In C#, there are several synchronization mechanisms to ensure thread safety. So far, this article has discussed locks, semaphores, mutexes, and monitors. You can also use the async/await pattern or the Task Parallel Library (TPL).
Async/await lets a method start a lengthy operation and yield control instead of blocking while it completes. The method resumes where it left off once the awaited task finishes, keeping the calling thread free to execute other work in the meantime.
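As a rough sketch, with the lengthy operation simulated by Task.Delay and the method names illustrative:
using System;
using System.Threading.Tasks;

public class AsyncExample
{
    static async Task Main()
    {
        // Start the lengthy operation without blocking the calling thread
        Task<string> lengthyTask = FetchDataAsync();

        // The calling thread is free to do other work here
        Console.WriteLine("Doing other work while the task runs...");

        // Await suspends this method (not the thread) until the task completes
        string result = await lengthyTask;
        Console.WriteLine(result);
    }

    static async Task<string> FetchDataAsync()
    {
        await Task.Delay(2000);   // Simulate a slow I/O-bound operation
        return "Data loaded";
    }
}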
TPL, on the other hand, is a set of public types and APIs that provides a simplified way to handle parallel and asynchronous code. It has classes that make it easy to create and manage parallel tasks.
For example, the Parallel class’s For and ForEach methods automatically divide work among multiple threads. TPL also has built-in synchronization mechanisms, such as locks and semaphores, for thread safety, and it supports cancellation of long-running tasks through cancellation tokens.
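Here’s a minimal sketch using Parallel.For; the iteration count and messages are illustrative:
using System;
using System.Threading;
using System.Threading.Tasks;

public class ParallelExample
{
    static void Main()
    {
        // Parallel.For divides the iterations among worker threads
        // automatically and blocks until all of them complete
        Parallel.For(0, 10, i =>
        {
            Console.WriteLine($"Processing item {i} on thread {Thread.CurrentThread.ManagedThreadId}");
        });

        Console.WriteLine("All items processed");
    }
}
Because the iterations run concurrently, the output order varies from run to run.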
Threading can improve the performance and responsiveness of a .NET application. But it can also introduce deadlocks, race conditions, and synchronization errors that negatively impact application performance.
To mitigate these issues, it’s important to follow safe threading practices. Avoid deadlocks by acquiring locks in a consistent order. Use the synchronization primitives provided by .NET, such as mutexes, semaphores, and monitors, to prevent synchronization errors and race conditions. A simpler option is the TPL, which provides a higher-level abstraction over synchronization primitives, allowing you to focus on program logic instead of low-level synchronization issues.
By following these safe threading practices, you can ensure that your multithreaded .NET applications are robust, efficient, and reliable.