What Are We Doing Wrong with Logging in C#?

Mabrouk Mahdhi
Oct 8, 2023

Logging is a bit like the unsung hero of software development. When done right, it silently provides us with a window into the operations of our software, helping us catch issues, debug errors, and understand user behaviors. But what happens when logging isn’t done quite right? It can quickly turn from a useful tool into a maze of confusing data or, worse, a gaping hole where information should be.

The main purpose of logging is not just to record that something happened, but to give us meaningful insights. We should be able to look at a log and answer the questions: What happened? Why did it happen? And, if needed, how can we prevent it from happening again? Therefore, our logs should be meaningful to those reading them. They should be actionable, providing clear information that helps direct next steps. Lastly, they should be efficient — giving us the right amount of information at the right time without hogging system resources.

However, many of us fall into common pitfalls when setting up logging for our C# applications. In this article, we’ll explore these pitfalls, from overlogging to missing out on the benefits of structured logging, and provide guidance on how to log the right way.

Overlogging and Underlogging

Imagine you’re trying to find a specific event in a book. If every page is filled with too much information (overlogging), it becomes tedious to locate what you’re looking for. On the flip side, if the book lacks necessary details (underlogging), you may never find the information you seek.

Overlogging

This is when we record too much information in our logs. While it might seem like a good idea to capture everything “just in case,” this can actually have some unintended consequences.

Issues with Overlogging:

  • Performance: Every log entry consumes resources. This could slow down your application, especially if logs are being written synchronously.
  • Analysis: Sifting through vast amounts of data can become a nightmare, especially when looking for specific issues.

C# Example:
Imagine you have an e-commerce application, and you’re logging every step of a checkout process:

public void Checkout(List<Product> cartItems)
{
    foreach (var item in cartItems)
    {
        logger.Info($"Started processing item: {item.Name}");

        var stockStatus = CheckStockAvailability(item);
        logger.Info($"Checked stock for {item.Name}. Status: {stockStatus}");

        if (stockStatus == StockStatus.Available)
        {
            var paymentStatus = ProcessPayment(item);
            logger.Info($"Processed payment for {item.Name}. Status: {paymentStatus}");

            if (paymentStatus == PaymentStatus.Success)
            {
                var shipmentStatus = InitiateShipment(item);
                logger.Info($"Initiated shipment for {item.Name}. Status: {shipmentStatus}");
            }
            else
            {
                logger.Info($"Payment failed for {item.Name}. Not initiating shipment.");
            }
        }
        else
        {
            logger.Info($"Stock not available for {item.Name}. Skipping payment and shipment.");
        }

        logger.Info($"Finished processing item: {item.Name}");
    }
}

=> In the above code, every item in the cart produces multiple log entries. If a user has many items in their cart, the logs quickly flood with repetitive messages, making it harder to spot meaningful or exceptional events. While it’s important to track the process, logging every routine action like this is overkill: a textbook case of overlogging.

Underlogging

The opposite problem occurs when we don’t log enough. We might miss out on vital clues about issues or behaviors.

Issues with Underlogging:

  • Debugging: It becomes challenging to diagnose problems without a clear log trail.
  • Lack of Insight: Without adequate logging, we miss out on understanding system behaviors and user actions.

C# Example:
Now imagine that, in the same e-commerce application’s checkout process, you trim the logging down significantly and end up not capturing enough information:

public void Checkout(List<Product> cartItems)
{
    foreach (var item in cartItems)
    {
        var stockStatus = CheckStockAvailability(item);

        if (stockStatus == StockStatus.Available)
        {
            var paymentStatus = ProcessPayment(item);

            if (paymentStatus == PaymentStatus.Success)
            {
                InitiateShipment(item);
            }
        }
    }
}

In the above code, several potential problem areas lack any logging:

  1. What happens if the stock isn’t available? There’s no log to tell us.
  2. If the payment fails, we wouldn’t have any information in our logs to debug the reason.
  3. If there’s an issue initiating the shipment, we’re left in the dark.

=> This is a clear case of underlogging. When things go smoothly, we might not see any issues. However, the moment something goes wrong, it becomes challenging to understand and diagnose the problem, since our logs don’t provide enough context or detail.

Finding the Balance

To find the right balance, we should focus on:

  1. Key milestones in our process.
  2. Potential points of failure where things might go wrong.
  3. Providing enough contextual information without redundancy.

Let’s refine our checkout example:

public void Checkout(List<Product> cartItems)
{
    logger.Info($"Starting checkout for {cartItems.Count} items.");

    foreach (var item in cartItems)
    {
        var stockStatus = CheckStockAvailability(item);

        if (stockStatus != StockStatus.Available)
        {
            logger.Warn($"Stock not available for {item.Name}. Skipping further processing for this item.");
            continue;
        }

        var paymentStatus = ProcessPayment(item);

        if (paymentStatus != PaymentStatus.Success)
        {
            logger.Error($"Payment failed for {item.Name}. Status: {paymentStatus}");
            continue;
        }

        var shipmentStatus = InitiateShipment(item);

        if (shipmentStatus != ShipmentStatus.Success)
        {
            logger.Error($"Shipment initiation failed for {item.Name}. Status: {shipmentStatus}");
        }
    }

    logger.Info($"Checkout process completed for {cartItems.Count} items.");
}

In this balanced version:

  1. We log the start and end of the checkout process, giving us a clear sense of the operation’s scope.
  2. We focus on potential issues, like unavailable stock or payment and shipment problems. These logs are essential for troubleshooting.
  3. We’ve removed repetitive logs that don’t offer significant value, like logging every time we check an item’s stock.

=> By focusing on key events and potential issues, we ensure our logs remain concise while still capturing all the necessary information for monitoring and debugging. This approach helps in maintaining system performance and ensures that our logs are actionable and meaningful.

Logging Sensitive Information

Ah, logs: that magical place where sensitive data vanishes from where it’s supposed to be and mysteriously reappears in plain text. Don’t you just hate when that happens? 🙄 But all fun aside, let’s dive deep.

Why it’s Risky

You might think that throwing user data or system responses directly into logs is harmless, like inviting a friend over for a surprise party. But here’s the twist: instead of your friend, you’ve unknowingly invited a hacker for tea! Logging sensitive information creates significant security risks and can also land you in hot water with regulators. Do GDPR, CCPA, or other data protection regulations ring a bell?

How to Avoid

Now, let’s get to the part where we make sure our logs don’t become an open treasure chest for pirates.

  • Beware of Raw Data: Always, and I mean ALWAYS, be skeptical of logging raw user inputs or responses directly. It’s like sending an invitation to hackers: “Please come and exploit me!”
// BAD CODE ALERT! 🚫
logger.LogInformation($"User Password: {password}");

Now, who in their right mind would want to print a password, right? It’s like leaving your house keys under the doormat with a giant sign that says “Here they are!”

  • Structured Logging with Care: We love structured logging, but like a good movie with a bad sequel, it can be misused. Ensure you’re only logging what’s needed. And again, avoid raw details that shouldn’t be out in the open.
// Still a BAD IDEA! 🚫
logger.LogInformation("Login attempt for {Username} with {Password}", user, password);

See that? Even when structured, the secrets can’t hide! Remember, the aim isn’t to make our logs a diary of secrets but to make them meaningful and actionable.
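
A safer pattern is to mask or redact values before they ever reach the logger, and to never log secrets like passwords at all. Here’s a minimal sketch, assuming Microsoft.Extensions.Logging; the LogSanitizer helper and MaskEmail method are illustrative names, not a library API:

// A hedged sketch: mask sensitive values before they reach the log.
// 'LogSanitizer' and 'MaskEmail' are illustrative helpers, not a library API.
public static class LogSanitizer
{
    public static string MaskEmail(string email)
    {
        var at = email.IndexOf('@');
        return at <= 1 ? "***" : email[0] + "***" + email[at..];
    }
}

// Log a masked identifier instead of the raw value. Passwords never get logged at all.
logger.LogInformation("Login attempt for {MaskedEmail}", LogSanitizer.MaskEmail(userEmail));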

Not Using Structured Logging

Structured logging is a method where you log information in a more organized or structured format. Instead of just logging plain text messages, structured logging allows you to log in a way that’s easier to read, analyze, and use. Imagine trying to find a needle in a haystack. Now, imagine if that haystack was organized into neat little boxes, and you knew exactly where to look. That’s the kind of advantage structured logging offers.

Why is structured logging beneficial?

  • Easier Analysis: With structured logs, it’s simple to filter, search, and analyze the information. This can save developers a lot of time when trying to understand issues.
  • Better Traceability: Structured logs can be linked with other logs, making it easier to trace a problem from start to finish. This is especially useful in complex systems where a single transaction might span multiple services or components.
  • Integration with Tools: Many log management tools work better with structured logs. They can automatically read the data and offer powerful insights without extra configuration.

How do traditional and structured logging compare?

Traditional String Logging:

logger.Info("User logged in with ID: 12345 at time: 14:30");

Structured Logging using Serilog:

logger.Information("User logged in with {UserId} at {Time}", 12345, "14:30");

In the structured example, “UserId” and “Time” are named properties. This means you can easily search for all logs with a particular UserId or at a specific time.
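
To see why tooling loves this format, here’s a minimal sketch of a Serilog pipeline that writes structured JSON, assuming the Serilog, Serilog.Sinks.Console, and Serilog.Formatting.Compact packages:

using Serilog;
using Serilog.Formatting.Compact;

// Emit events as compact JSON so the named properties survive into the output.
var logger = new LoggerConfiguration()
    .WriteTo.Console(new CompactJsonFormatter())
    .CreateLogger();

logger.Information("User logged in with {UserId} at {Time}", 12345, "14:30");
// Output (roughly): {"@t":"...","@mt":"User logged in with {UserId} at {Time}","UserId":12345,"Time":"14:30"}

Because UserId lands in the output as a real property rather than a substring, a log management tool can filter on it directly without any regex gymnastics.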

Ignoring Logging Levels

Logging levels are like the volume control of our application’s logs. They help us tune in to the information that’s most relevant based on the situation. In C#, like many other programming languages, we have various logging levels, such as Debug, Info, Warning, and Error. Each level serves a distinct purpose:

  • Debug: For detailed information primarily useful for developers during the debugging process.
  • Info: General messages about application operations.
  • Warning: Indicates something unexpected happened but the application can still run.
  • Error: Points out when something failed and needs immediate attention.

A common mistake many developers make is not using these levels appropriately. For instance, marking everything as “Info” might mean you’re overlooking crucial details or warnings. On the other hand, abusing the “Error” level for minor issues can lead to unnecessary panic and confusion.
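
As a rough sketch of what deliberate level choices look like (using Microsoft.Extensions.Logging; the variable names are placeholders):

logger.LogDebug("Cache lookup for key {CacheKey}", cacheKey);                        // developer-only detail
logger.LogInformation("Order {OrderId} placed", orderId);                            // normal operation
logger.LogWarning("Payment retry {Attempt} for order {OrderId}", attempt, orderId);  // unexpected, but recoverable
logger.LogError(ex, "Payment failed for order {OrderId}", orderId);                  // failed, needs attention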

Why is it important to get this right?

  1. Filtering: During a crisis, you can filter logs based on severity. If everything is “Info”, you might miss out on an important error message.
  2. Resource Efficiency: Some logging platforms charge based on the number of log entries. If you log everything as “Error”, not only is it misleading, but it might also be costly.
  3. Effective Communication: Logs are often used by different teams. Using the right logging level helps convey the right message about the state of the application.

Best Practices

  • Be Thoughtful: Before logging, ask yourself, “What level is most appropriate for this message?”
  • Regularly Review: As your application evolves, so will its logging needs. Periodically check to ensure logging levels still make sense.
  • Educate the Team: Ensure every team member understands the significance of each level and uses them appropriately.

Lack of Context

When it comes to logging, context is crucial. Imagine you’re reading a book, but all the names, locations, and time references are removed. It would be challenging to understand the story, right? Similarly, logs without adequate context can leave developers scratching their heads.

Logs that simply state “Error occurred” or “Transaction failed” don’t provide enough information. What transaction? Who initiated it? When did it happen? Without answers to these questions, debugging becomes a game of guesswork, wasting precious time.

Here’s how you can add more context:

  • User IDs: Knowing which user experienced the issue can help narrow down potential causes. For instance, it might be linked to specific user settings or data.
  • Transaction IDs: This can be especially handy for financial or database operations. If a transaction fails, having its ID makes it easier to trace back to the original request and see what went wrong.
  • Correlation IDs for distributed systems: When dealing with microservices or other distributed architectures, a single user action might trigger multiple internal requests. By assigning a unique correlation ID to each user action and logging it in all related internal transactions, you can trace the full path of an action, making debugging more straightforward.

In essence, always ask yourself: “If I see this log a month from now, will I understand what it means?” If the answer is no, you probably need more context.
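
One way to attach such context automatically is a logging scope. Here’s a minimal sketch, assuming Microsoft.Extensions.Logging and a provider configured to include scopes:

// Every log written inside this scope carries the CorrelationId,
// so related entries can be grouped without repeating it by hand.
var correlationId = Guid.NewGuid().ToString("N");

using (logger.BeginScope(new Dictionary<string, object> { ["CorrelationId"] = correlationId }))
{
    logger.LogInformation("Processing order {OrderId} for user {UserId}", orderId, userId);
    // ... anything called from here logs within the same scope ...
}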

Not Centralizing Logs

When we develop software, especially in today’s world, it’s often spread across multiple systems or services. Think of microservices or distributed architectures — these are setups where different parts of the software could be running in different places.

Now, let’s say something goes wrong. If each part of your software logs errors or messages in its own separate place, can you imagine the headache of tracking down a problem? You’d be like a detective having to visit ten different houses to gather clues for a single case.

This is why centralizing logs is so important. By having all logs in one place, it’s like gathering all those clues into a single room. You can see patterns, identify issues, and get to the bottom of problems much faster.
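
As one illustration, a Serilog setup can ship every service’s logs to a single store. This sketch assumes the Serilog.Sinks.Seq package; any central sink follows the same shape:

using Serilog;

Log.Logger = new LoggerConfiguration()
    .Enrich.WithProperty("Service", "checkout-service") // tag each entry with its source service
    .WriteTo.Console()
    .WriteTo.Seq("http://localhost:5341")               // the shared, central log server
    .CreateLogger();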

Ignoring Performance Impacts

When we log, we might sometimes forget that it can affect the performance of our applications. Let’s break it down into simpler terms:

  • Synchronous vs Asynchronous Logging: Imagine you’re writing in a diary. If you write each sentence and then pause for a few seconds after each one, that’s like synchronous logging. It can slow things down because you’re waiting after each log entry. On the other hand, asynchronous logging is like jotting down quick notes and then filling in the details later, without stopping your main work.
  • Potential Bottlenecks: Let’s say you’re sending letters (logs) to a friend (a database). If you send each letter one by one, it can take a lot of time, especially if you have many letters. If you put several letters in one big envelope and send them together, it’s faster. The same goes for logging. If we keep sending log entries one by one to a database without grouping them (buffering) or sending them in batches, it can slow things down.

So, when logging, always think about how it might impact the speed and smoothness of your application. Remember, efficient logging means not just recording information but doing it in a way that doesn’t hinder your app’s performance.
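
To make the batching idea concrete, here’s a toy sketch of a background writer that buffers entries and flushes them in batches. In real projects you’d normally lean on the async facilities of libraries like Serilog or NLog instead of hand-rolling this:

using System.Collections.Generic;
using System.IO;
using System.Threading.Channels;
using System.Threading.Tasks;

// Toy example only: callers enqueue and return immediately; a background
// task drains the queue and performs one file write per batch.
public sealed class BufferedLogWriter : IAsyncDisposable
{
    private readonly Channel<string> _channel = Channel.CreateUnbounded<string>();
    private readonly Task _worker;

    public BufferedLogWriter() => _worker = Task.Run(WriteLoopAsync);

    public void Log(string message) => _channel.Writer.TryWrite(message);

    private async Task WriteLoopAsync()
    {
        var batch = new List<string>();
        while (await _channel.Reader.WaitToReadAsync())
        {
            while (_channel.Reader.TryRead(out var message))
                batch.Add(message);

            await File.AppendAllLinesAsync("app.log", batch); // one I/O call per batch
            batch.Clear();
        }
    }

    public async ValueTask DisposeAsync()
    {
        _channel.Writer.Complete(); // let the worker drain remaining entries and finish
        await _worker;
    }
}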

Not Monitoring or Alerting Based on Logs

Logs are more than just records; they are insights into your application’s health and performance. When used effectively, logs can serve as the first line of defense against potential system issues. However, many developers make the mistake of just storing these logs and not actively monitoring them. This approach is like having a treasure trove of information and not using it.

Why Monitoring and Alerting Matter

  • Proactive vs. Reactive Approach: Instead of waiting for a user to report an error or for a system to crash, monitoring and setting up alerts can help you identify potential issues before they escalate. For instance, if you notice a sudden spike in error logs, it can be an indication of a larger problem at hand.
  • Faster Response Time: By setting up alerts based on specific log patterns or thresholds, teams can be notified instantly about critical issues. This allows for faster response and resolution times.
  • Informed Decisions: Continuous monitoring of logs helps in analyzing trends over time. This data-driven approach can aid in making informed decisions about system enhancements or fixes.

How to Set Up Monitoring and Alerts

  1. Define Critical Log Patterns: Understand your application and identify what constitutes abnormal behavior. Is it a sudden surge in error messages? Or a specific type of warning that might lead to system failure?
  2. Choose a Monitoring Tool: There are numerous tools available, both open-source and proprietary, that can help monitor logs in real-time. Integrate them with your logging system.
  3. Set Up Alerts: Based on the patterns you’ve defined, set up alerts. These can be in the form of emails, SMS, or even integrations with team communication tools.
  4. Regularly Review: As with everything, it’s essential to periodically review and refine your monitoring and alerting practices. Maybe a new feature has introduced new log patterns that should be monitored, or perhaps you’re getting too many false alerts and need to tweak the thresholds.

Logs are a powerful tool in a developer’s arsenal, but only if they’re used to their full potential. By actively monitoring logs and setting up alerts, you not only safeguard your application but also make your job easier by catching issues early on. Don’t just collect logs; use them!
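
As a toy illustration of steps 1 and 3, the sketch below counts recent errors in a sliding window and notifies the team past a threshold. In practice you’d configure this in a monitoring tool rather than hand-roll it:

using System;
using System.Collections.Generic;

public sealed class ErrorRateAlert
{
    private readonly Queue<DateTime> _recentErrors = new();
    private readonly int _threshold;
    private readonly TimeSpan _window;

    public ErrorRateAlert(int threshold, TimeSpan window)
        => (_threshold, _window) = (threshold, window);

    public void RecordError()
    {
        var now = DateTime.UtcNow;
        _recentErrors.Enqueue(now);

        // Forget errors that have aged out of the sliding window.
        while (_recentErrors.Count > 0 && now - _recentErrors.Peek() > _window)
            _recentErrors.Dequeue();

        if (_recentErrors.Count >= _threshold)
            NotifyTeam($"{_recentErrors.Count} errors in the last {_window.TotalMinutes} minutes.");
    }

    private static void NotifyTeam(string message)
    {
        // Placeholder: send an email, SMS, or chat-webhook notification here.
    }
}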

Hardcoding Log Messages and Metadata

Logging is all about capturing events or states in your application. But what if we go the path of writing static, hardcoded messages and data? Here’s why that might not be the best approach:

The Problems with Hardcoding

  1. Maintenance Nightmare: Imagine having hundreds of hardcoded log messages spread across your application. Now, what if you want to change the structure or wording of these messages? You’d have to hunt them down one by one.
  2. Localization Issues: For global applications, messages need to be localized (translated) into different languages. Hardcoded English messages can become a stumbling block for localization efforts.
  3. Scalability Concerns: As your application grows, adding new log messages or metadata becomes cumbersome. Consistency can be hard to maintain with hardcoded values.

A Better Way: Templates and Dynamic Metadata

Instead of hardcoding, use message templates. Message templates allow you to define a log message structure and inject dynamic data when needed.

For instance, compare these two approaches in C#:

Hardcoded:

logger.LogInformation("User Mabrouk Mahdhi has logged in at 3 PM.");

Using Templates:

string username = "Mabrouk Mahdhi";
DateTime loginTime = DateTime.Now;

logger.LogInformation("User {Username} has logged in at {LoginTime}.", username, loginTime);

With the second approach:

  • You can easily change the structure of the message without altering the data points.
  • The actual data (username, login time) is separated from the message template, offering better flexibility and readability.
  • It supports localization efforts. The template can be translated, and dynamic data inserted as needed.

Hardcoding may seem like the straightforward path initially, but it poses challenges in maintenance, localization, and scalability. Using templates and dynamic metadata is a more scalable and maintainable approach in the long run. As you log, always think about the future maintainability and adaptability of your application.

Not Testing Logging

Imagine you’re trying to perfect a recipe for a new dish. You’re jotting down every ingredient, every step, and even taking meticulous notes of what not to do. And then when you’re done cooking, you realize… you never tested if your pen was working and the entire recipe page is blank! That’s what not testing your logging is like.

Logging isn’t just a side hustle for your application; it’s its backbone! So why wouldn’t we test it?

Consider this hilarious (and a tad heartbreaking) C# code:

try
{
    int divisor = 0;
    int divisionResult = 10 / divisor; // throws DivideByZeroException at runtime
}
catch (Exception ex)
{
    logger.Error("Oops, something went wrong!"); // 'ex' and all its details are thrown away
}

Looks fine, right? But when there’s an error, all we get is “Oops, something went wrong!” Oh no! Did I divide by zero? Did I encounter a null reference? Did the computer decide to take an unexpected nap? Who knows? If our logging failed or wasn’t detailed enough, we’d be in the dark.

Errors can also creep up in places you’d least expect. Maybe your disk runs out of space, or there’s a misconfiguration somewhere. If your logs are your eyes into your application, failing to test them is like trying to navigate through a dense forest blindfolded. Spoiler alert: you’ll bump into a few trees!
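
One way to actually test your logging is with a tiny in-memory ILogger that records what was written. Here’s a minimal sketch against Microsoft.Extensions.Logging (richer test doubles exist in community packages; the code under test is hypothetical):

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// Records every formatted entry so a test can assert on it afterwards.
public sealed class InMemoryLogger : ILogger
{
    public List<string> Entries { get; } = new();

    public IDisposable? BeginScope<TState>(TState state) where TState : notnull => null;
    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception? exception, Func<TState, Exception?, string> formatter)
        => Entries.Add($"{logLevel}: {formatter(state, exception)}");
}

// In a test: run the code under test, then assert the expected entry exists.
// var logger = new InMemoryLogger();
// new CheckoutService(logger).Checkout(cart);                         // hypothetical code under test
// Assert.Contains(logger.Entries, e => e.Contains("Payment failed")); // e.g. with xUnit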

Best Practices for Effective Logging in C#

  1. Know Your Audience: Whether it’s a fellow developer, someone from operations, or an auditor, tailor your logs to the folks who’ll be reading them.
  2. Review and Refine: Set aside time periodically to look over and improve your logging practices.
  3. Performance & Security Matter: Logs shouldn’t slow you down, nor should they be a treasure trove of secrets. Be wary.
  4. Choose the Right Tools: C# has some great logging tools like Serilog, NLog, and log4net. Use them, love them, and keep them up to date.

Summary

Logging is not just a rudimentary task but a crucial aspect of building resilient and efficient systems. While it’s tempting to overlook or oversimplify it, we must remember that logs serve as a vital diagnostic tool for developers, operations teams, and even auditors. Overlogging, underlogging, or making any of the other mistakes outlined above can lead to obscured insights, security vulnerabilities, and wasted resources. It’s essential to strike the right balance, ensuring logs are meaningful, actionable, and efficient. Just as with any other aspect of software development, logging practices require periodic reviews and refinements. As developers and engineers, let’s pledge to give logging the attention it deserves, using the tools and best practices tailored for C#. By doing so, we not only enhance our systems but also foster a culture of excellence and proactive monitoring.


Mabrouk Mahdhi

Microsoft MVP, Author & Senior Consultant @ eBiz Consulting GmbH