
Software Logging Schemes?

kdawson posted about 6 years ago | from the ripsaw-or-chainsaw dept.

Programming 225

MySkippy writes "I've been a software engineer for just over 10 years, and I've seen a lot of different styles of logging in the applications I've worked on. Some were extremely verbose — about 1 logging line for every 2 lines of code. Others were very lacking, with maybe 1 line in 200 devoted to logging. I personally find that writing debug and informational messages about every 2 to 5 lines works well for debugging an issue, but can become cumbersome when reading through a log for analysis. I like to write warning messages when thresholds or limits are being approached — these tend to be infrequent. I log errors whenever I catch one (but I've never put a 'fatal' message in my code, because if it's truly a fatal error I probably didn't catch it). Recently I came across log4j and log4net and have begun using them both. That brings me to my question: how do the coders on Slashdot handle logging in their code?"


As little as practically possible (0)

the eric conspiracy (20178) | about 6 years ago | (#24630419)

Log the starting conditions so you can reconstruct data. Otherwise don't do much logging because it will hurt application performance, sometimes drastically.

Re:As little as practically possible (5, Insightful)

0123456 (636235) | about 6 years ago | (#24630471)

"Otherwise don't do much logging because it will hurt application performance, sometimes drastically."

You're assuming that performance -- or, more precisely, CPU usage -- is important; in many cases, reliability (and being able to track down bugs after a crash) are far more important than CPU usage. With quad-core CPUs so cheap these days, we can easily afford to spend another thousand dollars to throw more processing power into a system which has cost a couple of hundred thousand dollars of programmer time to develop and will cost thousands of dollars an hour for any downtime.

not cpu bound... disk bound (5, Insightful)

JonTurner (178845) | about 6 years ago | (#24630523)

It's not CPU that's at a premium, it's disk IO. And on virtualized machines (as are extremely popular in corporations and hosting farms), where there might be four different OSes running on the same physical hardware, disk becomes a scarce resource very, very quickly. And not only does your virtualized server go to shit, it takes the others down with it, since they can't get timely disk access either.

Re:not cpu bound... disk bound (1)

0123456 (636235) | about 6 years ago | (#24630699)

"It's not CPU that's at a premium, it's disk IO."

True, but again, if it's an important system you can buy a dedicated server or a second disk for logging for the cost of a few hours (possibly a few minutes) of downtime.

Re:not cpu bound... disk bound (4, Informative)

Heembo (916647) | about 6 years ago | (#24630899)

For high-availability clustered web applications, it's not disk IO that is the problem, but network overhead.

I tend to use log4j and asynchronous logging that passes log messages to a syslog server that can handle the file io - and it ends up being network overhead that is the killer.

Re:not cpu bound... disk bound (3, Insightful)

hackstraw (262471) | about 6 years ago | (#24631401)

I tend to use log4j and asynchronous logging that passes log messages to a syslog server that can handle the file io - and it ends up being network overhead that is the killer.

People have said disk IO and CPUs are both cheap. NICs are VERY cheap, too.

Re:not cpu bound... disk bound (2, Interesting)

morgan_greywolf (835522) | about 6 years ago | (#24631593)

Companies with virtualized machines are often also using storage area networking and related high-availability technologies. The traditional bottleneck associated with disk I/O does not happen nearly as badly.

Re:As little as practically possible (4, Insightful)

dslauson (914147) | about 6 years ago | (#24630933)

"You're assuming that performance -- or, more precisely, CPU usage -- is important; in many cases, reliability (and being able to track down bugs after a crash) are far more important than CPU usage."

I work on a real-time embedded medical device, where both performance and reliability are vital. We've got constrained resources, and the system must be extremely responsive.

Our logging scheme is pretty cool. It's written so that two computers can log to a single hard drive, and each logging statement must define a log level. So, for example, if I'm writing GUI code, I can log to log_level_gui_info, log_level_gui_debug, log_level_gui_error, or any of a number of more specific log levels.

The idea is

  1. Some of these log levels we can turn off before a production release.
  2. We have a special tool for reading these logs (they're encrypted), and in this tool you can check off which log levels you care to see, and which you don't

So, we have two ways to filter out extraneous logging that we don't care about (one actually keeps the logging from happening, and one just filters it out during analysis), and we can log as freely as we like as long as we're smart about which levels we're using.

As much faith as we all have in our own code, nothing's as frustrating as trying to analyze a log that came in from the field where there's just no information about what went wrong.

Re:As little as practically possible (1, Insightful)

theshowmecanuck (703852) | about 6 years ago | (#24631033)

You're assuming that performance -- or, more precisely, CPU usage -- is important; in many cases, reliability (and being able to track down bugs after a crash) are far more important than CPU usage.

I wonder why, given the huge increase in the performance of computers over the last ten years and more, it still takes some games one to five minutes to load... one reason I am more likely to play spider solitaire now when I just want to play something for 10 or 15 minutes. Anyway... it is this kind of attitude causing it (e.g. "Screw it... we have good CPUs and lots of memory... who gives a shit if I don't feel like considering performance"). If people would program things as efficiently now as they did in years past, performance tuning analysts would be out of work, the enterprise systems I see being built now wouldn't be such performance dogs out of the gate, games would be more fun, etc. etc. etc. etc.

ok... I feel better now...

Re:As little as practically possible (2, Informative)

linear a (584575) | about 6 years ago | (#24631271)

Afraid you're describing a very natural behavior that's unlikely to disappear. Developers (not just software) tend to work until each constraint is just met and then stop to work on the next constraint. E.g., get the load time down to the maximum acceptable time, then stop working on it.

taking it a bit further - no logging at all (3, Insightful)

JonTurner (178845) | about 6 years ago | (#24630485)

Don't coddle weak programmers... it's survival of the fittest out here. Either they learn to nourish themselves from the ample teat of a stack dump, or they must perish. It is for the good of our civilization. I know this seems harsh, young Jedi, but it is the way of the Elders of Assembler, from the ancient Time Before OO. Now go Forth and code.

Okay, joking aside. Parent has a great point -- logging can generate incredible volumes of text and can form a remarkable bottleneck, especially on VM systems where your OS may not be the only one hitting the disk.
So take advantage of Log4J/Net's ability to log at different severity levels and make logging globally configurable so you can enable/disable it entirely at runtime. I'd recommend you log the following: object creation, scarce resource allocation, recoverable failure/error conditions and unrecoverable failures. Preface each severity level with a unique label so you can grep for it later. Even at the most verbose level, you can then grep your output to see only what's of interest to you (e.g. "unrecoverable:...").
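A rough sketch of the greppable-label idea, using Python's stdlib logging as a stand-in for Log4J/Net (Python's CRITICAL level plays the role of "unrecoverable"; the messages are invented for illustration):

```python
import logging
from io import StringIO

# The formatter prefixes every record with its severity label, so a later
# `grep '^CRITICAL:'` pulls out just the unrecoverable failures.
buf = StringIO()
h = logging.StreamHandler(buf)
h.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
log = logging.getLogger("greppable.app")
log.addHandler(h)
log.setLevel(logging.DEBUG)

log.debug("object Cache created")              # object creation
log.info("acquired 4 of 4 DB connections")     # scarce resource allocation
log.warning("retrying fetch after timeout")    # recoverable failure
log.critical("config missing, cannot start")   # unrecoverable failure

lines = buf.getvalue().splitlines()
fatal = [l for l in lines if l.startswith("CRITICAL:")]
```

Even with everything enabled, the per-line label keeps the interesting records one grep away.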

Re:As little as practically possible (2, Interesting)

liquidpele (663430) | about 6 years ago | (#24630759)

You just made every technical support person in the world want to hurt you. Sure, don't log all that by default, but there better damn well be a way to turn on all the verbose logging when needed dammit.

Re:As little as practically possible (1)

the eric conspiracy (20178) | about 6 years ago | (#24631165)

Turning on verbose logging doesn't help you after the process has gone tits up. It's ok if you are debugging, but really if you are debugging you want to use a debugger, not a log.

As far as hardware being cheap etc. as other posters have posited, I'd sure like to know where you work that a Sun E10000 and an EMC Symmetrix are considered cheap.

The fact is that if you are running apps on big iron, hardware is not cheap, and the economic effectiveness of an application can well depend on how many transactions it can process in a given period of time.

Yes, it can be tough to figure out what went wrong with a sparse log. But that's the way it is, sorry.

On the other hand I do agree with people who state disk space is usually not a factor. There are ways to manage that issue.

As much as practically possible (5, Insightful)

Anonymous Coward | about 6 years ago | (#24631047)

On the other hand, broken code hurts application performance, sometimes drastically.

I'm an SQA engineer with years of experience working with large scale enterprise systems. Generally speaking the cost of unexpected outages or data corruption outweighs the cost of hardware. In such systems the costs of deployment activity itself can be such that you'd rather pay for more hardware to support extremely verbose logging.

Sure, boneheaded logging can cause unnecessary performance hits, the obvious example being logging in a loop when logging at entry and exit would have sufficed. But that's not what we are really talking about here. You posited that you should do as little logging as practically possible, and I believe that you are wrong.

Log lots and log often. Just do so intelligently. Use a logging framework (log4j, log4net, log4perl etc) and set the priority appropriately. Only use ERROR for real errors (unexpected code paths or data), use WARN when a performance metric is hitting a soft limit (to warn you before you hit that hard limit), and use DEBUG to verbosely log anything else of general interest.

Rarely you might also want to log in an extremely verbose manner data that wouldn't ordinarily be interesting, and this should be logged at a TRACE level. Generally speaking if this is the case then the code itself is due for a refactor.

FATAL should normally be reserved for errors that prevent correct startup - generally if an application runs correctly at startup then any potential faults that you see and handle now become ERRORs, because there is nearly always something better an application can do than log FATAL and exit. As the OP observed, if you have a potential fault that kills your application and you don't see and handle it then you don't have the opportunity to log FATAL anyway.
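The WARN-at-soft-limit rule above can be sketched like this, with Python's stdlib logging standing in for log4j/log4net (the queue-depth metric and thresholds are invented for illustration):

```python
import logging
from io import StringIO

buf = StringIO()
h = logging.StreamHandler(buf)
h.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("pool.monitor")
log.addHandler(h)
log.setLevel(logging.DEBUG)

SOFT_LIMIT, HARD_LIMIT = 80, 100

def check_queue_depth(depth):
    # ERROR only for a real fault; WARN before the hard limit is hit;
    # DEBUG for the ordinary detail nobody reads until something breaks.
    if depth >= HARD_LIMIT:
        log.error("queue depth %d at hard limit %d, rejecting work", depth, HARD_LIMIT)
    elif depth >= SOFT_LIMIT:
        log.warning("queue depth %d past soft limit %d", depth, SOFT_LIMIT)
    else:
        log.debug("queue depth %d", depth)

for depth in (10, 85, 100):
    check_queue_depth(depth)

lines = buf.getvalue().splitlines()
```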

By using a logging framework, many logging pitfalls can be avoided because the framework itself provides well tested facilities, e.g. time-stamping, log rotation, file-handle management etc. In addition, using a framework allows the operator to tune the logging on a very granular level. This allows for a trade-off to be made where if a performance impact is noted in a well used class then its logging can be reduced at runtime. Sure, there is still a small performance impact because the logging framework has to do a "if (logMessage.logLevel >= loggingClass.logLevel) then {...}" comparison, but in the scale of things that impact is tiny.
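That cheap level check, and the explicit guard frameworks offer for genuinely expensive message construction, might look like this in Python's stdlib logging (the expensive_dump helper is hypothetical):

```python
import logging

log = logging.getLogger("hot.path")
log.setLevel(logging.WARNING)   # DEBUG records are below the threshold

calls = []
def expensive_dump():
    calls.append(1)             # count how often the costly work actually runs
    return "giant state dump"

# The framework's own check is the cheap "record level >= logger level"
# comparison; for expensive arguments, ask the same question up front so
# the costly work is skipped entirely when DEBUG is disabled.
if log.isEnabledFor(logging.DEBUG):
    log.debug("state: %s", expensive_dump())

enabled = log.isEnabledFor(logging.DEBUG)
```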

My profession is not about finding and fixing bugs. It is about understanding the software that is being delivered and deployed. It is about understanding what defects exist (or may exist) and the possible implications of those defects. It is about reducing the risk of defects through analysis. Analysis of the software's functionality, analysis of the software's performance, analysis of the processes used to produce the software itself. You will never be yelled at for releasing software with a well understood and documented defect, but the shit will hit the fan when you release major defects that are not understood.

Logging is an _invaluable_ tool in this analysis. You'd be a fool to not use it effectively.

Jakarta Commons Logging (1, Informative)

Anonymous Coward | about 6 years ago | (#24630427)

I use Jakarta Commons Logging...

Re:Jakarta Commons Logging (1)

I confirm I'm not a (720413) | about 6 years ago | (#24631141)

I use SLF4J (as a wrapper around Log4J, usually), and consider Commons Logging deprecated. This is a blog post from the author of Commons Logging:

I'll come right out and admit it: commons-logging, at least in its initial form, was my fault...If you're building an application server, don't use commons-logging. If you're building a moderately large framework, don't use commons-logging. If however, like the Jakarta Commons project, you're building a tiny little component that you intend for other developers to embed in their applications and frameworks, and you believe that logging information might be useful to those clients, and you can't be sure what logging framework they're going to want to use, then commons-logging might be useful to you.

Most of the time I'm *not* building a Commons-style component, so JCL's dependencies hinder more than help. SLF4J, however, is very light-weight and very useful. One feature I like is built-in parameterized message formatting that won't evaluate the message if logging is disabled.

What do you want to achieve... (5, Informative)

AaronLawrence (600990) | about 6 years ago | (#24630435)

As usual, "it depends" on what you are trying to achieve. Nobody can give you a blanket recommendation. But I guess in general: the log files need to give you enough information, which can't be obtained in other ways, to solve any problem that comes up.

We have a realtime product that goes all over the world and talks to hardware that we can't always get access to ourselves. Therefore, we sometimes must debug our code remotely. Obviously, logging is critical to this. We keep sometimes hundreds of MB of logs and have archiving rules and a tool for users to collect them. Every layer of the system keeps its own logs, and all logs have timestamps to milliseconds.

In our case we log all the data back and forth, and then every important decision the code makes. For example if it decides there is something wrong with incoming data, it must log that. Any action it decides to take must be logged. Any data that will be passed on to other layers/the outside world must be shown. Generally, whenever we forget to log some of this data we will later regret it ("why the hell is it ignoring that device state..."). We also log at startup, basically the whole system configuration so that we can reproduce it.

Callstacks when there is an exception can be very useful. However, a lot of "errors" (at least in our case) are not exceptions but rather unexpected data or behaviour. We rarely have a crash and in state-based systems a callstack doesn't tell you much about what's going on. So a callstack is not useful for all situations.

Other times, you just want logging to give you a clue where in the code it was so you can run up the debugger and step through it (you do know how to step through code in the debugger, right?). In that case, too much logging can just get in the way. It might be sufficient in a GUI or web app to say which screen/page and which button was clicked.

You'd hope users could report this kind of detail, but not always: if the user is working in another language, in another country, with two layers of helpdesk between you and them, and they are busy doing other things when the problem occurs and only call in the issue an hour later, and the helpdesk takes a week to report it to you, you may just get a vague or even misleading report that no-one can remember anything about when you ask questions. In those cases log files may be all you have to go on.

There is also a tradeoff between log detail and manageability. Besides the difficulty in reading long log files, having a lot of detail means maintaining a lot of extra code. It also means that log files can become unmanageably large. In our case those hundreds of MB of logs can be a huge problem for customers to send to us because they have low-quality internet connections (small companies in Mexico for example).

Re:What do you want to achieve... (5, Informative)

ericfitz (59316) | about 6 years ago | (#24630709)

+1 for parent.

If you want good logging, then define requirements for it, just as you would for any other feature of the program. You also need to define the audience for the log. The comments thread has focused on debug logging for developers (Linus "no debuggers" Torvalds would be proud) but there are a number of reasons why the users who are stuck^h^h^h blessed with your software might want logging. For instance:

- audit trails (often required by organizational security requirements or regulatory requirements)
- accounting/billing (you'd be amazed at the odd ways people come up with to bill for things)
- health monitoring (the admin might not want to watch your program 24x7 to see if it is running; they might want to program automation to be alerted when it is not working properly)
- troubleshooting (believe it or not, your software might actually break when running in the wild)

Anyway, think about your use cases, and then think about what to instrument for each use case, and what to put in the events.

For instance, if you want to make your daemon monitorable for health, then think about all its dependencies. Does it read config from a file? The file is a dependency. What happens if a value is invalid? Does it fail or use a default? If it fails, reading the value is a dependency. Need a network socket? Dependency. Connection to remote machine? Dependency (actually multiple- name resolution, network connectivity, authentication, app-level connectivity, etc.). After you've enumerated all your dependencies, then add instrumentation in your code to log events when the dependency is unsatisfied (==unhealthy/broken), and when it is satisfied (==healthy). Make sure to log BOTH states, so that the monitoring app can decide which state you are in. Make sure to log only once per state transition. In each event, try to put as much information about the situation as you can- why you are in the state ("the value foo from daemon-config was invalid"), status codes, etc.- give your user a fighting chance of being able to use your log to diagnose and resolve the issue.
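The log-once-per-state-transition advice can be sketched as follows; the DependencyMonitor class and the config-file dependency are invented for illustration:

```python
import logging
from io import StringIO

buf = StringIO()
h = logging.StreamHandler(buf)
h.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("health.monitor")
log.addHandler(h)
log.setLevel(logging.INFO)

class DependencyMonitor:
    """Remembers the last known state per dependency and logs only on a
    change, emitting BOTH the broken edge and the healthy edge so a
    monitoring app can always infer the current state."""
    def __init__(self):
        self.state = {}

    def report(self, dep, healthy, why=""):
        if self.state.get(dep) == healthy:
            return                      # same state as before: stay quiet
        self.state[dep] = healthy
        if healthy:
            log.info("dependency %s healthy %s", dep, why)
        else:
            log.error("dependency %s broken %s", dep, why)

mon = DependencyMonitor()
mon.report("config-file", False, "(value foo invalid)")
mon.report("config-file", False, "(value foo invalid)")   # suppressed repeat
mon.report("config-file", True, "(reloaded)")

lines = buf.getvalue().splitlines()
```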

If you want to instrument for audit, then I suggest reading the Orange Book or the Common Criteria documents for suggestions on what needs to be audited and what information to put in the events.

For accounting, examine the RADIUS RFCs.

Hope this helps.

Re:What do you want to achieve... (1)

linear a (584575) | about 6 years ago | (#24631367)

Amount of logging to do totally depends on the context. Regarding logging for regulatory/legal/sundry requirements, treat that as "user requirements" or, better, necessary features. Probably should decouple one's thinking of logging for those requirements from logging for development/improvement/troubleshooting purposes.

Three levels (2, Interesting)

spaceyhackerlady (462530) | about 6 years ago | (#24630443)

I find I usually end up with 3 levels of logging:

  1. Normal operation, often with some notion of "Yes, I'm still running even though I haven't done anything else lately".
  2. Details, usually corresponding to processing steps.
  3. Algorithm tracing. This includes things like logging SQL queries. This is usually only of interest to me.


I never really thought about it. (1)

Ant P. (974313) | about 6 years ago | (#24630445)

After a few greps over the PHP code I'm working on at the moment, it turns out there's... no obvious log pattern at all. It's logging to the webserver error log in some places, logging to syslog in others, and using assert() randomly. Maybe I should've paid more attention to that bit while writing it.

Standard format for domain information (4, Informative)

MichaelSmith (789609) | about 6 years ago | (#24630453)

I work on a large air traffic control system. Logging is a huge issue. Log files are collected centrally by a separate application. One important issue IMO is making the contents of your various log files meaningful to people who are not familiar with them.

If your system has objects of types A, B and C which can be handled by different components of your system, then you should make the logging system in those components print information about those objects in exactly the same way.

While you are at it, make the log format easily parsable by software. You don't want to be looking for a needle in a gigabyte size haystack of trace information without help from a tool which understands what it is looking for.
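One common way to make logs tool-parsable, sketched here with Python's stdlib logging and one JSON object per line (the field names are invented, not a standard):

```python
import json
import logging
from io import StringIO

class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON object so a tool can reconstruct
    fields instead of regex-mining free text."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "component": record.name,
            "msg": record.getMessage(),
        }, sort_keys=True)

buf = StringIO()
h = logging.StreamHandler(buf)
h.setFormatter(JsonFormatter())
log = logging.getLogger("tracker.flightplan")
log.addHandler(h)
log.setLevel(logging.INFO)

log.info("object A handed to component B")

# The analysis tool side: parse records back out of the haystack.
records = [json.loads(l) for l in buf.getvalue().splitlines()]
```

The same idea works with key=value pairs; the point is that every component emits the shared fields identically.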

Re:Standard format for domain information (1)

TooMuchToDo (882796) | about 6 years ago | (#24631025)

Enjoy all the fun of ADS-B =) As an IT professional and a private pilot, I hope if you're working on a project related to that, it works flawlessly.


Log levels (0)

Anonymous Coward | about 6 years ago | (#24630461)

Use different log levels for events, e.g. debug, info, warning, error, fatal...
This way the user can set a log-level threshold, below which messages are ignored, so you don't have to plough through log files full of debug data, for example.

It varies (2, Interesting)

pjwhite (18503) | about 6 years ago | (#24630469)

I will add lots of logging to debug a specific problem and then rip it out when the problem is fixed. Permanent logging includes run time problems like serial communication errors, file not found, etc. I like to make various logging functions switchable, so user input can be logged for example, but only when needed. Once a program is running well, it should only log data for dire exceptions, unless regular accounting logs are needed.

Re:It varies (1)

Zadaz (950521) | about 6 years ago | (#24630789)

Yeah, it does really vary. I admit to doing this myself, though when I'm coding with others they despise me for crufting up the code. (Though coding with others often leads to conditions where logs are needed more often anyway...)

During development and testing I log all non-looping function calls, object creation/destruction and memory management, though I cull it as development proceeds (or just switch it all off).

This is overkill but I got in the habit when maintaining someone else's code on a Project of Madness. At worst it helps me track down quickly where to start the debugger.

I virtually always log all of my IO (keyboard, network, etc.) during development. There are some excellent tools out there (particularly used in game development) that can record and play back all IO to get you back to a bug.

Don't overdo it, especially if you're logging over a network. I once worked on a net-based video player (think YouTube with video editing). The CEO took it upon himself late one night (honestly) to add a lot more logging: not just errors, but the status of everything, how long things took to load, including creepy ad-tracking-type info like which buttons were most popular, etc, etc. Almost immediately the servers, which were built to handle tons of bandwidth, were getting hit hard by the clients. The logs were hilarious to watch as the clients started reporting "server taking too long to respond, retrying" errors, which quickly escalated into a DOS attack. I moved on from the company soon after; they are now defunct.

It depends on the project, the environment, and your style and skill where the sweet spot is.

Logging to a database (3, Informative)

Animats (122034) | about 6 years ago | (#24630481)

My online applications log to a database, not a text file. Multiple applications on different machines can log to the same database table. There's no need for "log rotation"; old entries can be summarized and purged by date on the live database. With appropriate indexed fields, you can find key log entries in huge log files very rapidly.

Even program faults are logged to the database. If the program crashes, the top-level exception handler catches the event, does a traceback, opens a fresh database connection, and logs the traceback.
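A minimal sketch of that crash path, with an in-memory SQLite table standing in for the log database (the schema and helper names are invented for illustration):

```python
import sqlite3
import traceback

def open_log_db():
    # Stand-in for the real log database; :memory: keeps the sketch
    # self-contained.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE IF NOT EXISTS app_log (level TEXT, message TEXT)")
    return db

def run_guarded(task):
    """Top-level exception handler: on a crash, open a FRESH connection
    (the crash may have poisoned the old one) and log the traceback."""
    try:
        task()
    except Exception:
        db = open_log_db()
        db.execute("INSERT INTO app_log VALUES (?, ?)",
                   ("FATAL", traceback.format_exc()))
        db.commit()
        return db
    return None

db = run_guarded(lambda: 1 / 0)
rows = db.execute("SELECT level, message FROM app_log").fetchall()
```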

Re:Logging to a database (1)

ericfitz (59316) | about 6 years ago | (#24630553)

Logging to a database is generally a poor strategy. I see this over and over and cringe every time.

Databases generally increase the overhead of logging significantly, and they don't add significant value. Sure, you can "select * from ... where ...". But do you REALLY need this? Most of the time when you need something from the log you can just grep /error/ or something comparable.

Databases are great for reporting but are just unnecessary overhead for logging.

Re:Logging to a database (2, Interesting)

plierhead (570797) | about 6 years ago | (#24630631)

IMO database logging has good points and bad points. On the good side, it's easy to manipulate (query, purge, transform, summarise) the log entries, and you can access them remotely using the database tools you already know. On the bad side, it's undoubtedly slower and more resource-intensive. Also, unless you have multiple DB connections (which itself raises complexity and overhead), committing a log entry to the database will also commit your unit of work. It seems to work well for "user logging", i.e. where the end user of your application (rather than just the dev team) would want to read the messages.

Re:Logging to a database (3, Insightful)

AaronLawrence (600990) | about 6 years ago | (#24630655)

Not to mention the added complexity and failure modes. All but the most trivial databases can go wrong in interesting ways, and when that happens where will you put your logging? It's precisely when things go wrong that you need logging the most. So you want the least possible dependencies. Right now, that's appending text to a file - file systems are simpler and tested more thoroughly than even the best databases can be.

Say the user (or the system, or the virus...) shuts down the database server in the middle of operation. How do you prove that after the fact if the logs were going into the database?

Re:Logging to a database (1)

marxmarv (30295) | about 6 years ago | (#24630807)

What if your web server and programming language are forbidden from creating or writing files? (I am easily persuaded that this is a good idea from a system security point of view if you're a hosting provider.)

Re:Logging to a database (1)

FishWithAHammer (957772) | about 6 years ago | (#24630849)

It's not a benefit to system security; just run the script engine (PHP, whatever) as the user who owns the account.

Re:Logging to a database (1)

CastrTroy (595695) | about 6 years ago | (#24630941)

What kind of hosting provider are you talking about? Every hosting provider, from the $4/month plans all the way up, lets you write to files in your own personal directory. Most that I've seen give you SSH access to your directory so you can do whatever you want. Come to think of it, I've never seen a hosting environment where you could run server-side code but couldn't write to a file on that server.

Re:Logging to a database (1)

fishbowl (7759) | about 6 years ago | (#24631569)

>What if your web server and programming language are forbidden from creating or writing files?

The time-tested solution in the UNIX world is to log messages over sockets to syslog daemons (on dedicated hosts, if the situation dictates).
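A sketch of that pattern using Python's stdlib SysLogHandler; here a throwaway UDP socket on localhost stands in for a real syslogd:

```python
import logging
import logging.handlers
import socket

# Bind a local UDP socket to play the part of the syslog daemon.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # kernel picks a free port
server.settimeout(5)
port = server.getsockname()[1]

log = logging.getLogger("web.app")
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", port))
log.addHandler(handler)
log.setLevel(logging.INFO)

# The app never touches the local filesystem; the record goes out over
# the socket in syslog's "<priority>message" wire format.
log.error("cannot write local files, shipping log remotely")

datagram = server.recv(4096).decode()
handler.close()
server.close()
```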

Re:Logging to a database (1)

bpkiwi (1190575) | about 6 years ago | (#24630825)

Logging to a disk file might work when your application is running on a single box, but when you have something that runs on a pool of a hundred servers distributed across three different data centres, and you get a bug report that customer x "was using the service just after lunch yesterday and it did something funny", you're going to have fun trying to find the log file.

Re:Logging to a database (1)

AaronLawrence (600990) | about 6 years ago | (#24631107)

Yep, large-scale, multi-user/multi-instance server applications have to do something smarter. I don't have any direct experience, but it's obvious the file-only approach is not very manageable.

Still the general advice applies: treat logging as any other development task. What are the requirements and constraints? In such a server environment you will arrive at different answers than for a single user desktop application or an embedded controller.

Re:Logging to a database (0)

Anonymous Coward | about 6 years ago | (#24630929)

The worst problem with database logging is that it can't log anything while the database is offline or unavailable.

Re:Logging to a database (1)

linear a (584575) | about 6 years ago | (#24631421)

Not necessarily. The worst problems I've seen with database logging were when the database becomes "uncleanly" unavailable and the application doesn't recognize what is happening. Ugh.


Not all logging is useful ... (1)

Eric Damron (553630) | about 6 years ago | (#24630489)

I set a "logging level" in my configuration file so that when I need a program log to be verbose I can make it so but for normal use I can keep the logs fairly light.

A log is only useful if it records information that is used. The problem is that sometimes everything is running fine and super-verbose logging is a waste, but then something in the environment changes and suddenly it's useful to temporarily log a lot of debugging information. Setting a "logging level" in a configuration file lets me do light logging, verbose logging, and just about anything in between without having to recompile my projects.
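That scheme might look like this with Python's stdlib tools (the [logging] section name and key are invented; any config format works the same way):

```python
import configparser
import logging
from io import StringIO

# Contents of a hypothetical config file; change "WARNING" to "DEBUG" to
# get verbose logs without recompiling or redeploying anything.
config_text = """
[logging]
level = WARNING
"""
cfg = configparser.ConfigParser()
cfg.read_string(config_text)

buf = StringIO()
log = logging.getLogger("configurable.app")
log.addHandler(logging.StreamHandler(buf))
# getLevelName maps the string "WARNING" to the numeric level.
log.setLevel(logging.getLevelName(cfg["logging"]["level"]))

log.debug("noisy detail")       # suppressed under WARNING
log.warning("disk 90% full")    # emitted
```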

whatever you do, don't use nfs (4, Interesting)

Sir_Real (179104) | about 6 years ago | (#24630491)

If you're using log4j, don't use multiple hosts to write to the same nfs filesystem file. You'll run into blocking issues and log4j doesn't handle them correctly. The nirvana of clustered app logging is an async JMS queue. You fire off the message and forget it. You don't wait for file handles.
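The fire-and-forget idea can be sketched with Python's stdlib QueueHandler/QueueListener pair, which decouples the logging call from the slow consumer the same way an async JMS queue would (the StringIO here stands in for the remote consumer):

```python
import logging
import logging.handlers
import queue
from io import StringIO

# The "slow" handler lives on the listener's thread, not the app's.
buf = StringIO()
slow_handler = logging.StreamHandler(buf)

q = queue.Queue()
listener = logging.handlers.QueueListener(q, slow_handler)
listener.start()

log = logging.getLogger("clustered.app")
log.addHandler(logging.handlers.QueueHandler(q))
log.setLevel(logging.INFO)

log.info("order 123 accepted")   # returns immediately; no file-handle wait
listener.stop()                  # drains the queue before returning
```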

Re:whatever you do, don't use nfs (1)

TooMuchToDo (882796) | about 6 years ago | (#24631049)

Holy shiat. I just thought of a kick ass way to use the messaging queue system Amazon's cloud computing provides.

I'm off to prototype some code. Thanks!

Depends on the application you're writing (2, Interesting)

lteo_calyptix (1346023) | about 6 years ago | (#24630531)

It depends on the app you're writing -- is it a web app, a database app, a userspace program written in C, a Perl/Ruby script, or...? At work we created our own logging library in C to emit log events at different levels, e.g. informational, debugging, warnings, errors, fatal messages. We then have wrappers around that library so that languages like Ruby can access it. But in hindsight I think I would've just used syslog if I had to start over. :)

Production or Dev Environment? (1)

Comatose51 (687974) | about 6 years ago | (#24630535)

Are we talking about production environment or a development environment? In production, we log our exceptions or when major changes happen, such as something being deleted. The goal there is to help our support team and customers nail down the problems when they arise. I don't know if LoC is the right measure for these things. Our UI (it's a web app) has little to no logging since errors just appear when the UI has a problem. So we can have hundreds of lines of code with no logging there. Our backend logs much more frequently. Basically every "catch" block has a call to the logger (log4net is what we use as well). Still, most of us prefer to hook a debugger up to the process during development and rarely ever go back through the logs. The only time the logs are frequently used prior to production is when we're running automated QA and we want to know what happened when a test case fails. In that case, when we compile with the debug flag, we log quite a bit and more frequently and log things that only our developers would understand.

I guess maybe the rule of thumb is, depending on the environment, generate logs that are targeted at the audience when things go wrong. I'm not convinced frequency of logs is a meaningful measure of appropriate logging.

youre doing it wrong (1)

Tennguin (553870) | about 6 years ago | (#24630555)

"I personally find that writing debug and informational messages about every 2 to 5 lines works well for debugging an issue" Learn to use a debugger. What you are doing is backwards. The logger is meant to help out when the application you are working on is being executed remotely and attaching a debugger isn't practical or desirable. There is no code in the world that I can think of that needs a log line after every two steps in a procedure.

Re:youre doing it wrong (2, Interesting)

wdsci (1204512) | about 6 years ago | (#24630745)

Agreed, logging every 2 to 5 lines gives you the kind of information that you should really be getting with a debugger. Of course, when you're trying to diagnose a specific problem, sometimes it can be easier to put log messages every line or two than to repeatedly step through the code with a debugger, but that's sort of the same thing, just a temporary debugging aid - most of that logging output should be removed once you've figured out what's going on. For general use, I think about one log call per function might be reasonable - more if it's a long function, or none if it's a short function that does something really simple. And even most of those should probably be disabled once you release the software.

Re:youre doing it wrong (0)

Anonymous Coward | about 6 years ago | (#24630937)

Or... It is a fault that happens every oh 10000 cycles of the system or so... at unpredictable times, in a way that should not be possible without some outside factor.

Then 'excessive' logging can be quite useful. Special cases sometimes require special, and often technically "stupid", solutions.. ;)

An example:

An RFID-based system I worked on would sometimes get garbled data back from an external embedded device. The data would pass CRC checks etc., but its contents were wrong. This happened with a few RFID cards out of some tens of thousands.
Logging everything sent to and received from the embedded device on every read was hugely useful in tracking down the issue.
It turned out there was a bug in a conversion routine on the embedded device that only surfaced with a specific pattern in the SN on the RFID tag.. slipped through testing somehow.. not my project ;)

The logging here generated about 10 lines per read, which added up with several thousand reads a day, but it was a hell of a lot better than what management suggested... that I sit watching the system screen and wait for it to bomb :-p

Re:youre doing it wrong (1)

Simon80 (874052) | about 6 years ago | (#24631337)

In that case, you were logging in response to a specific problem, and it wouldn't have taken a log line every 2-5 lines of other code to achieve what you wanted. If code needs to log that often, it's probably lacking in proper use of abstraction.

Re:youre doing it wrong (2, Interesting)

ciggieposeur (715798) | about 6 years ago | (#24631145)

There is no code in the world that I can think of that needs a log line after every two steps in a procedure.

Any code in which timeouts can affect the result requires this kind of logging, which includes networking code or code that handshakes between multiple threads/processors. Example: debugging something like a new x/y/zmodem implementation is nigh impossible within a debugger, because your side must respond within 10 seconds or the other side will start acting differently.

I Don't! (4, Funny)

Vectronic (1221470) | about 6 years ago | (#24630563)

I don't, save the rain forest, hug a tree, prevent deforestation, stop logging now!

Re:I Don't! (0)

Shados (741919) | about 6 years ago | (#24630987)

Text files fortunately do not use paper.

Re:I Don't! (2)

robfoo (579920) | about 6 years ago | (#24631627)


I let the kernel do it for me (2, Funny)

ILongForDarkness (1134931) | about 6 years ago | (#24630573)

Segmentation fault: core dumped

Logging, parts I and II (2, Informative)

trapezoid (258439) | about 6 years ago | (#24630661)

I wrote up a two-part piece on logging best and worst practices a while back. See Part I [] and Part II [] if you are interested.

TOO MUCH! (2, Interesting)

imp (7585) | about 6 years ago | (#24630675)

1 line of logging per 200 lines of code is way too much. 2 in 5 lines is absolutely insane. I've seen way too many systems where the logging made them totally unusable. People just don't realize the cost of logging everything.

There's absolutely no way to document anything this verbose. Without documentation, the logging is useless.

Re:TOO MUCH! (1)

ciggieposeur (715798) | about 6 years ago | (#24631181)

People just don't realize the costs of logging everything.

Are you operating on embedded systems? Because anything faster than about 500MHz can easily do a few million "if (logging_level >= LOGGING_LEVEL_CONSTANT) { ... log_something ... }" type checks per second.

I handle mine (1)

TornCityVenz (1123185) | about 6 years ago | (#24630751)

with guru meditation errors of course!

Logging is a form of debugging (0)

Anonymous Coward | about 6 years ago | (#24630783)

Debugging is for people who write buggy code. Nuff said.

Try putting logging calls in the vital parts (1)

Zapotek (1032314) | about 6 years ago | (#24630819)

I personally put logging calls in the most-used functions.
For example, in my MySQL handler class I have put the logging code into the function that executes the SQL queries, which is the function everything else relies on.
So when I need to debug, I have a stack of SQL queries along with their error codes and error strings.
That's enough to figure out what went wrong.

My point is: put the logging calls in the vital functions, the ones that do all the work. This will save you many lines of code and processing cycles.

Re:Try putting logging calls in the vital parts (1)

Firehed (942385) | about 6 years ago | (#24631023)

That comes at the expense of a LOT of usually-unnecessary disk/network activity, as you're calling the thing ALL THE DAMN TIME. Just imagine if Facebook or Yahoo logged every query executed. Maybe it helps during the early debugging stages of smaller apps/sites, but your logs would get out of control faster than you can imagine on larger utilities... they'd need some sort of gigantic SAN just for the log file.

Or did you mean that you only log failed queries? That would make a hell of a lot more sense.

Filters (2, Interesting)

zarthrag (650912) | about 6 years ago | (#24630863)

My logging is set up so I can quickly filter down to the type of data I want. It's more than just "information", "warning", and "error" -- it's also by (cpp) file, module, etc. That way, if an issue arises, I can eliminate the cruft and see just what I need. It just takes planning.

Syslog (1)

Pr0Hak (2504) | about 6 years ago | (#24630925)

Syslog, of course.

Whatever is useful while programming. (2, Informative)

Restil (31903) | about 6 years ago | (#24630951)

I tend to do my debugging by inserting a lot of printf statements to indicate where in the program I currently am and the value of any critical variables at that time. As the output information is no longer needed (i.e. I fixed the bug it was attached to), I tend to cull out whatever isn't useful anymore. However, I tend to keep the starting messages in functions related to a routine I'm working on or making more than a trivial change to... since chances are, knowing me, I'm going to end up putting them back in there anyway once I create a new bug... and let's face it, it WILL happen.

Once I'm done, I go back, remove or comment out (usually just comment out) all the messages that have no redeeming value for a properly functioning program, and turn the rest into debug statements which print based on the debug level provided at execution time... or sometimes I use a mask to select which types of messages to display.


Re:Whatever is useful while programming. (1)

Shados (741919) | about 6 years ago | (#24630971)

I tend to do my debugging by inserting a lot of printf statements to indicate where in the program I currently am and the value of any critical variables at that time.

Wouldn't using... you know... a -debugger- be more efficient at doing that? Breakpoints, watching variables, etc.?

Re:Whatever is useful while programming. (2, Insightful)

CastrTroy (595695) | about 6 years ago | (#24631007)

Disappointingly enough, this is one of the things that isn't covered very well in a lot of courses. I didn't get any exposure to debuggers in any classes I took throughout university; I learned about them myself. Same goes for a lot of other useful tools, like source control systems. While I learned a lot while taking my degree, very little of it dealt directly with the process of how you actually sit down and write code. Seriously, some people think that printf really is the best/only way to debug, and I can see why. My first Java course had us all typing up code in Notepad and compiling/running from the command line. After that, courses just told us to use Java, without pointing to any specific tools that we should be using. It was so bad that first-year Java actually used a special add-on library to do input/output using a GUI, so when it came time to stop using that in second year, we had to go figure out how to do I/O all over again.

Re:Whatever is useful while programming. (1)

Zadaz (950521) | about 6 years ago | (#24631051)

Wouldn't using... you know... a -debugger- be more efficient at doing that? Breakpoints, watching variables, etc.?

All you get on some embedded systems is serial output. No watch variables or breakpoints, just whatever text you can manage to dump out the serial port.

In addition logging function calls can quickly narrow down -where- to put your breakpoint and what variables to watch.

Re:Whatever is useful while programming. (1)

T3Tech (1306739) | about 6 years ago | (#24631577)

While I could use gdb in a client/server fashion for the embedded stuff I do (linux based), I find it much simpler to just add debug level printf statements. Then, telnet in and run the software in debug output mode in the foreground to at least get an idea of where in the code the problem is.

Of course, as stated in a sibling post even this isn't an option on some embedded systems.

Log4j supports selective logging (1)

coyote-san (38515) | about 6 years ago | (#24630963)

Log4j supports selective logging. That means you can have info/debug/trace priority messages in place, but never see them in the log unless you explicitly enable extra logging for that class or package. You can do this at runtime, e.g., via something like 'chainsaw' (which attaches to a running process) or hooks in your UI.

Our policy is that logs are usually very quiet: application startup/shutdown and not much more. But if there's a problem, the debugging messages are already in place to let us peek into the system, even if it's been deployed to a production site.

BTW, AOP is also great for this. You can configure logging interceptors that log activity in a development environment but are easily removed in production. This is a natural approach when going from one layer to the next, e.g., when wrapping the DAO layer.
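For reference, the per-class/per-package control described here can be expressed in a log4j 1.x properties file; the package name below is illustrative:

```properties
# Keep the root logger quiet: WARN and above only.
log4j.rootLogger=WARN, main

# Enable DEBUG for just one package (e.g. the DAO layer).
log4j.logger.com.example.dao=DEBUG

log4j.appender.main=org.apache.log4j.ConsoleAppender
log4j.appender.main.layout=org.apache.log4j.PatternLayout
log4j.appender.main.layout.ConversionPattern=%d %-5p %c - %m%n
```

Since the file is read at runtime (and can be re-read with `PropertyConfigurator.configureAndWatch`), levels can be changed on a deployed system without recompiling.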

Lotsa logging (3, Informative)

SpinyNorman (33776) | about 6 years ago | (#24630983)

I write code for Telecom test systems that need to run 24x7 processing highly varying requests from dozens of different client systems. Our system consists of dozens of different processes/components per host, with multiple hosts all invoking components on each other as needed (all via CORBA). There are very many paths that any request can take through our system.

In this environment we log VERY heavily, since each request is close to unique and we need to be able to determine the path it took through the system, why it took it, and what happened, in the event of any bug report. Some of the heaviest-used modules can produce close to 1GB of log per day per host - up to a couple hundred lines of logging per request per process it passes through. We use a custom printf-like log library written in C++ (which auto-rotates the log files based on various criteria), a custom tail utility for dealing with the large log files (tail a log file from a given timestamp - done instantly via binary search on the timestamps), and a daily cron job to compress the older log files and move any older than 5 days off the production servers to someplace with more storage.

Forget logging: use an IDE, exceptions and asserts (1)

dstates (629350) | about 6 years ago | (#24630989)

First, a well functioning app should really not be generating any log file except what the application itself needs (e.g. usage logs for a web server).

Second, instead of wading through reams of log output, use exceptions and assert statements so that you only generate log output when something has gone wrong.

Finally, use your IDE. Instead of trying to infer state by combing through logs, set breakpoints where you catch an exception. This lets you trace back to see why you ended up with the exception without suffering much, if any, of a performance hit.

As we move to increasingly distributed systems, log files become less useful. For many distributed apps, you may not even have access to all of the various file systems where a log might be written. Best solution is to rely on highly modular and thoroughly debugged code.

Re:Forget logging: use an IDE, exceptions and asse (1)

carlzum (832868) | about 6 years ago | (#24631219)

I would argue that distributed systems make logging more important. An IDE is great when you can reproduce the error in development, but it's not always possible. A log can answer questions like "which node is exhibiting the problem?", "what actions did the user perform?", "what did the message I sent look like?". The ability to enable verbose logging is a lifesaver when troubleshooting complex, distributed software bugs.

One line of logging for every line of comments (3, Funny)

sprior (249994) | about 6 years ago | (#24630991)

That should be about right...

Logging or debugging? (1)

serviscope_minor (664417) | about 6 years ago | (#24630993)

Sometimes with printf-style debugging, so much data comes out that it is not practical to leave the logging in normal code. That said, I dump all the data I want to stdout and use AWK to massage the results. If you put in some kind of tags or markers, this can simplify the log-processing code significantly.

You don't need databases or anything structured; simply use one stream per process. The process runs sequentially, so a sequential storage structure (i.e. a file) is the best match for the data. Also, I avoid markup like XML in favour of plain text, since plain text is easier to process with standard UNIX tools, which also tend to operate more efficiently since parsing is just a matter of looking for whitespace or \n. And the same techniques work from any reasonable language, since they can all output text.

Um. Write to memory often, disk seldom and filter? (0)

Anonymous Coward | about 6 years ago | (#24631027)

You have to be logging a *lot* to bog down a system if you're writing to memory and dumping out to disk every 100000 lines or so (or on exit). As for viewing, how hard is it to filter the written output in place?

Did I miss something?

Bleh (0)

Anonymous Coward | about 6 years ago | (#24631041)

Logging is for the weak, all my shit is rock solid, no need for logging.

I use God (1)

b1gb1rd (1334679) | about 6 years ago | (#24631045)

I keep my fingers crossed and pray to God, asking him to tell me where the error in my code is. It works 100% of the time. (You deserve this response for answering such a silly question, get back to work!)

Re:I use God (2)

T3Tech (1306739) | about 6 years ago | (#24631641)

I can see the t-shirts on ThinkGeek (which shares a corporate overlord with Slashdot) now...

God is my debugger.

Reminds me of my uni days... (1)

Cynic.AU (1205120) | about 6 years ago | (#24631061)

Wow. This is a disturbing reminder of my University assignments, for which I forgot to remove those cout << "No segfault yet!\n"; statements.

Truly, logging is an amateur pursuit, & a grownup Software Engineer utilises formal verification and.... *GASP*.... unit testing!!

Seriously, do the slashdot developers call a logging function somewhere with "Okay, we've started the comment routine!" in it?

Embedded debugging (2)

shadoelord (163710) | about 6 years ago | (#24631089)

I work on set-top boxes, and not every platform we port to has a good debugger (hell, it's been years since I've seen a good debugger). Our logging system is all in-house: multiple "levels" for each log statement (noise, information, warnings, fatals, etc.), with each module creating its own log id and setting its "preference level". It works well, but:

1) Useless logs.
Engineers not taking the time to write logs that are useful: "Got to here", "Value=1", etc. A few of us write enum-to-string functions and pass them to the logging system for cleaner output.
2) Running at the speed of 115200.
Most of the time we've only got a serial port, and with multiple threads trying to access it there has to be some synchronization, which affects threads of any priority. Using a logger that caches and outputs logs at its own pace is nice.

Re:Embedded debugging (2, Informative)

Perf (14203) | about 6 years ago | (#24631375)

Engineers not taking the time to write logs that are useful. "Got to here", "Value=1", etc. A few of us write enum-to-string functions and pass them to the logging system for cleaner output.

An Engineer does something that stupid?!!!
Who told them they were engineers? The HR Dork?

log it, don't forget the person who supports it (1)

baydat (803771) | about 6 years ago | (#24631097)

Interesting topic. I have been working with a new installation of a portfolio accounting system. The developers don't talk to their support personnel, who in turn can't tell the end user (me) how to crank up the logs to find issues, so you find core dumps lying around daily, or extra processes that they tell you just to kill and move on. Think about your end result: software out in the wild that is supportable, not mysterious to the end user. It would be a nice product if it were free; unfortunately it's uber-expensive and, I feel, is a piece of...

just don't reinvent the wheel (1)

VoidEngineer (633446) | about 6 years ago | (#24631111)

Having spent many years as a Systems Administrator, I would argue that the most important part of logging is to make sure that it is in a format and location that other people can use. People won't use the logs if they don't know where they are. And if you're developing for Windows, I would go further and say that the only place you should consider logging to is the Windows Event Log (what Event Viewer displays). I'm not sure how the Java stuff goes, but if you're developing in .NET, ditch log4net and stick with the System.Diagnostics classes.

People have spent a lot of time and effort building logging systems and installing systems to monitor those logging systems. Don't try to reinvent that particular wheel.

I asked slashdot a very similar question (2, Interesting)

emmjayell (780191) | about 6 years ago | (#24631115)

First, let me give you my own perspective. I recommend having each subsystem log in such a fashion that you can easily grep to include or ignore that subsystem. For example, in one package the level tag was the first four characters of every message, as follows:

LVL1 - basic startup and shutdown info ( a few lines per run)
LVL2 - Interactions with the database
LVL3 - Interactions with the file system
LVL4 - Detailed database interactions including each sql statement
LVL5 - amazingly verbose debug information including memory and variable allocations

In almost all cases, I recommend being able to set each level on or off. Your sysadmin (maybe yourself) will appreciate that ability.

If appropriate, I recommend an 'audit' record after each completed or aborted transaction, e.g. after every order, every user change, or whatever is important for accountability / business activity monitoring purposes.
This is the original question [] .

A few things that've worked for me... (0)

Anonymous Coward | about 6 years ago | (#24631137)

Like other posters said, there's no one size fits all. But, I've pulled together a few tools and practices that seem to be pretty adaptable...

Be sure to have all your logging go through one routine -- don't sprinkle your code with printf's. There's more logic than you might think that needs to go into logging: sending the log to different places, changing it for debug/release builds, and the all-important consolidating of repeated messages into a "Previous message repeats 29302 times."

Once you do that, define two macros (yeah, I'm a C programmer), one that vanishes in release builds and one that doesn't.

One trick that's worked well is to define a bitmask with one bit for each of the different things your program does. Then, add this as the first parameter to all your log calls. At runtime, set the mask to the things you're interested in, and have your log routine only print/record/send ones that match. So, if I'm trying to troubleshoot, say, something dropping messages at an end-user site, I can have them set the runtime flag to NETWORK_RECEIVE|BUFFER_MANAGEMENT, and only relevant log messages show up. This is really useful, because it lets you put lots and lots of trace in your code but keeps log files manageable. (Of course, the same log message might be relevant to a few different things. This is handled by or'ing together flags in the log call.)

If it's code that's going to run for a long time, or pretty much if you're leaving any trace in a shipping version, be sure to put some logic in to keep an eye on the size of the log file. Sure, everyone's got lots of disk these days, but you still feel pretty dumb when a production machine crashes after a year because the log file got to hundreds of MB...

Use dtrace. (0)

Anonymous Coward | about 6 years ago | (#24631147)

With OpenSolaris it is no longer necessary to do the actual logging yourself. Through a combination of truss (which can do system calls as well as library calls) and dtrace, the actual mechanism of logging should not be one the programmer needs to worry about.

With dtrace, the application only needs to define points of interest in the flow of the program and export the relevant data. The collection (or logging) of data is done when it is required, so there is no performance impact when it is not in use.

Inside the kernel, we have "fbt" (function-boundary-tracepoint) for every function - modulo optimization.

An example of what an application can do is MySQL. See here for more details:

This doesn't take care of every problem where logging is useful, but it does take care of a lot of them.

Re:Use dtrace. (1)

Zan Lynx (87672) | about 6 years ago | (#24631253)

I was going to recommend this also. Dynamic code patching is a very powerful tool and can remove the need for thousands of explicit "if(debug) log;" calls.

Virtual machines like the JVM can do this too.

Anti-pattern (1)

istartedi (132515) | about 6 years ago | (#24631193)

I've seen at least one example where it looked like the programmer added logging when trying to debug a problem. Then, apparently on the assumption that it was "useful once, so it might be useful again", the logging was left in. There was a compile-time AND a runtime switch to toggle this logging. It tended to be ON all the time, cluttering the logs with needless information and making the code look ugly. In theory, compile-time switches can eliminate the performance issue, but some of the logs depended on values that had to be computed beforehand. The result? That code had to be left in, even if the log wasn't there. It would sometimes generate a compiler warning when compiling without logging, since those variables were initialized but otherwise unused. In some cases, macros were used to dodge around this.

The remedy to this anti-pattern? When debugging a specific issue, I prefixed the log with the word DIAGNOSTIC, in all caps, just like that. Thus I knew that such logs should be removed as soon as the specific bug was fixed; a grep for DIAGNOSTIC would locate all these messages. The remaining logs were gradually removed, and things that were supposed to be logged under normal operation were written into the program specifications. These could be toggled at runtime via the verbose options, which need to be distinguished from logs generated specifically for programmers working on specific issues.

Code to debug a specific issue should never, I repeat, NEVER find its way into a release. Many will argue that it's a waste to throw away the code that instruments the bug; but I've found this worry to be unfounded (I'm reminded of another internet rant about "how to properly delete code", and I know I've been guilty of leaving old code around, so I'm not claiming purity on any of these issues).

If a problem is likely to occur often in code that does not have bugs (e.g., a client that can't connect to a server because the server is down) then of course that should be logged, otherwise you aren't going to know why the client has no data. That just cycles back to what I said before, namely that you should only log what's specified in the requirements for logging.

I happily use Perl (1)

sigzero (914876) | about 6 years ago | (#24631341)

So I either use Log::Dispatch or Log4perl (a Perl port of Log4j). Both are great.

log4j, syslog (1)

rickla (641376) | about 6 years ago | (#24631369)

Use log4j, and use the levels you can set. It depends on what you need, but I use info for audit-trail kind of stuff. And for performance, put conditionals around debug logging and other CPU hits. You can write to syslog using log4j too, for a nice central backup. I don't know where all this chatter about VMs and disk being a bottleneck comes from; I have never seen that. It's just sequential writing, and I do gigs of it, no issue. But if it is, redirect to a server made to store it.

Log::Log4perl rocks (1)

talexb (223672) | about 6 years ago | (#24631469)

The CPAN module Log::Log4perl [] is a great tool for logging -- it means you can stick in plenty of debug statements, and dial them up for debugging, then dial them down in Production.

This module uses several message levels; in descending order of importance they are FATAL, ERROR, WARN, INFO, DEBUG and TRACE. It's possible to log messages to files, a screen, or even to an E-Mail message.

The real strength of this method is that you don't need a 'debug' version of the code -- all configuration is done externally, which means you can turn logging on for a Production problem, run your test, and turn it off again, and look at the log files off-line.

use keys not levels. (0)

Anonymous Coward | about 6 years ago | (#24631527)

I use a custom logger that doesn't log via levels, but rather by text keys that are loaded from a config file at runtime. Logging code looks like:

log( "dbg", "blah blah" );
log( "err", "some problem" );
log( "file load", "file not found" );

This way I can log as often as I like while keeping log entries only to the information I'm interested in, simply by making changes to the log config file.

Log the meaningful, email the important (1)

Jane Q. Public (1010737) | about 6 years ago | (#24631567)

(1) Log lots of information when you are debugging. Then, when you are done debugging that particular problem, TAKE THAT LOGGING CODE OUT.
If you find it tedious to find your debug log lines while debugging, it is a sign that you are logging too much everyday stuff! In the meantime, surround your debug log lines with multiple lines of "===================" or the like. Then, of course, take those out too.

(2) On an everyday basis, log only UNEXPECTED conditions. After all: if it was expected, then your code handled it, right? That means you have to plan for unexpected things to occur. You should be doing that anyway.

(3) If it is a very important issue, you should have an emailer set up, not just a logger. That way, you will be notified when urgent things come up.

(4) You should be doing tests on your code, not just logging. Logs are for when problems occur. Prevent them from occurring in the first place by writing tests. If you are not familiar with that practice, look up "test driven development" on the internet. You will find more than you want to know.

Bunion Software's Blue Ox Lumberjack Logging (0, Offtopic)

c0d3r (156687) | about 6 years ago | (#24631613)

Check out Bunion's Blue Ox Lumberjack Logging Solutions for industrial-strength logging functionality. []

operational errors (2, Informative)

Spazmania (174582) | about 6 years ago | (#24631675)

1. Distinguish between serious -operational- issues and other issues. The sysadmin doesn't need to know that you had a pointer problem; there's nothing he can do about it. He does, however, want to know that a message was received and the appropriate action taken. Or that a connection was attempted but failed.

2. Be grep-friendly. The first log entry related to a particular activity should have an ID of some sort in the log message, which is then included in every additional log entry associated with that activity.

Tagged based logging (1)

NightStorm (300433) | about 6 years ago | (#24631695)

I used a system where log messages were assigned arbitrary tags, which were themselves strings: the communications subsystem got the "COM" tag, the memory manager got the "MM" tag, etc. Log levels were just tags ("ERROR", "FATAL", etc.). The system supported 32 output channels; each channel was associated with some output stream (file, pipe, etc.) and each had a Boolean expression which filtered what went to the channel ("FATAL" or "ERROR" or "ALERT" and "MM" or "COM"). A channel could also be assigned a "lifetime" so that output to the channel only lasted for some period of time. Each channel could be set up while the daemon was running, so it was easy to start debug messages going to a particular console, memory manager messages to a file, alerts to a pager app, etc. It was great when remotely accessing a running system and diagnosing a client's issue. When you are all done, you just shut down the channel or let it expire. End of output and no more wasted I/O.
