
Secure Syslog Replacement Proposed

Unknown Lamer posted more than 2 years ago | from the why-all-the-hate dept.

Software

LinuxScribe writes with this bit from IT World: "In an effort to foil crackers' attempts to cover their tracks by altering text-based syslogs, and improve the syslog process as a whole, developers Lennart Poettering and Kay Sievers are proposing a new tool called The Journal. Using key/value pairs in a binary format, The Journal is already stirring up a lot of objections." Log entries are "cryptographically hashed along with the hash of the previous entry in the file" resulting in a verifiable chain of entries. This is being done as an extension to systemd (git branch). The design doesn't just make logging more secure, but introduces a number of overdue improvements to the logging process. It's even compatible with the standard syslog interface allowing it to either coexist with or replace the usual syslog daemon with minimal disruption.
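
For illustration, a minimal Python sketch of the chained-hashing idea described above (the hash function, seed, and entry names are assumptions; the Journal's actual binary key/value on-disk format is undocumented):

import hashlib

def chain_logs(entries, seed=b"\x00" * 32):
    # Hash each entry together with the hash of the previous entry,
    # yielding (entry, hex digest) pairs that form a verifiable chain.
    prev = seed
    for entry in entries:
        digest = hashlib.sha256(prev + entry.encode()).digest()
        yield entry, digest.hex()
        prev = digest

for line, h in chain_logs(["sshd: session opened for user root",
                           "sudo: root : TTY=pts/0 ; COMMAND=/bin/ls"]):
    print(h, line)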

I don't know... (4, Informative)

ksd1337 (1029386) | more than 2 years ago | (#38152154)

Text is damn convenient to use. How are you gonna grep a binary file?

Re:I don't know... (0)

Anonymous Coward | more than 2 years ago | (#38152170)

binary2text file | grep

Re:I don't know... (5, Informative)

Anonymous Coward | more than 2 years ago | (#38152194)

journalgrep -e "Nov 0[1234]-[0-9][0-9]-2011" | less

Re:I don't know... (5, Insightful)

iluvcapra (782887) | more than 2 years ago | (#38152252)

Witness the deeply-ingrained UNIX Philosophy thing where if you can't use grep(1), it naturally follows that the thing is impossible to search.

You can't grep a Berkeley DB, yet for some reason you can find stuff in it, too.

Re:I don't know... (4, Funny)

Anonymous Coward | more than 2 years ago | (#38152488)

You can't grep a Berkeley DB, yet for some reason you can find stuff in it, too.

strings berkeley.db | grep "data"

Enjoy,

Re:I don't know... (5, Insightful)

quanticle (843097) | more than 2 years ago | (#38152498)

The problem isn't searching in the ordinary case. The problem is searching in the failure case. I can grep a truncated, mangled text file. If I truncate and mangle your BerkeleyDB can you still search it?

Re:I don't know... (0)

Zero__Kelvin (151819) | more than 2 years ago | (#38152852)

"The problem is searching in the failure case. I can grep a truncated, mangled text file. If I truncate and mangle your BerkeleyDB can you still search it?"

The integrity of a mangled text file is no greater than the integrity of a mangled binary file. And, yes I can: strings my.db | grep "data"

Re:I don't know... (3, Insightful)

msclrhd (1211086) | more than 2 years ago | (#38153950)

With binary data, you have potential issues with the binary parser (e.g. has a hacker corrupted the log to trigger a buffer exploit in the binary-to-text program?). Also, binary data is open to endian issues and integer/pointer size issues. Not to mention versioning (trying to read logs written by a different version of journald that writes an incompatible journal file). Likewise if you only have access to fragments of the log.

Re:I don't know... (5, Insightful)

TheRaven64 (641858) | more than 2 years ago | (#38152536)

Grep is just one example. Grep lets me search any text file. Tail -f lets me watch anything that's added to it. Wc -l lets me enumerate the entries in it. Awk lets me extract elements from it. There are lots of other standard UNIX utilities for manipulating text files. If you are replacing a text file with a binary file then you need to provide equivalent functionality to all of these. If this file is one that is important for system recovery, then you need to put all of these in the root partition, without significantly increasing its size. These are not insurmountable problems, but they are problems.

Re:I don't know... (2)

Superken7 (893292) | more than 2 years ago | (#38153000)

This is the first thing I thought too. But if I'm going to have many piped commands, why not add one more that cats the thing in text format? I can't think of a reason why that would be inconvenient. My rotated syslog is gzipped and I can just zcat it, or even cat it | gunzip - | whatever. So the slight inconvenience *might* well be outweighed by the new benefits.

Re:I don't know... (2)

Unknown Lamer (78415) | more than 2 years ago | (#38153102)

My argument in that case is that grep is insufficiently expressive for the modern world.

The UNIX "everything is a file, and those files are all ^J delimited records" is a hack for a world when record based file systems were seen as overcomplicated (because, well, MULTICS style files were overcomplicated). It's time to move up [gnu.org] .

Re:I don't know... (5, Insightful)

hedwards (940851) | more than 2 years ago | (#38153410)

I disagree, the fact that such a model still works so well decades later is definitely evidence that they were doing something right. When it comes down to it, if you make everything a file then you don't have to worry about envisioning niche uses as most of them can be accomplished by chaining together several commands. The ones that don't are still not impossible as you can just throw together a Perl script or similar to manage them.

jobs. (5, Funny)

mevets (322601) | more than 2 years ago | (#38153984)

Attitudes like yours cost the industry jobs. It is best for us if we store data away into increasingly inappropriate places so that lusers have to pay us to get their own data.

Hell, going back to standard data formats and reusable tools would be the death of a thousand increasingly bizarre specialty languages alone.

As a penance, you should rewrite diff in python to work on sqlite databases. That should set the industry back another few years.

Re:I don't know... (1)

Unknown Lamer (78415) | more than 2 years ago | (#38154152)

A counterargument would be that if we had a minimally structured container format that everyone used you could save a lot of time. Untyped binary blobs work as a lowest common denominator format, but they can be a huge pain to work with. Personally I wish we'd move to a more structured model for data (the relational people have a point) -- I find it far more pleasant to e.g. write an SQL query than to hack together a script to munge data from five different file formats to get the output I need.

But, of course, libc provides untyped blob I/O and so that's what everyone uses. It's kind of hard to break from 40 years of code no matter the benefits.

Re:I don't know... (5, Interesting)

pclminion (145572) | more than 2 years ago | (#38153278)

If you are replacing a text file with a binary file then you need to provide equivalent functionality to all of these.

No, I just need to provide a bin2txt program. The UNIX philosophy, I think you missed it. It's based on simple, self-contained, modular components, not some "everything is just text!" fantasy.

Re:I don't know... (1)

TheRaven64 (641858) | more than 2 years ago | (#38153600)

You also need to provide a txt2bin program. This is what subversion does, for example.

Re:I don't know... (2)

madbavarian (1316065) | more than 2 years ago | (#38153142)

Not being able to grep the logs would suck. It would break every hack script I have for checking things in the logs.

Furthermore, I'm not sure what problem the binary file with crypto signing would solve vs. just also logging to a secure log machine. Syslog already allows one to duplicate the logging to any number of off-machine syslog daemons.

For figuring out how a break-in was done, wouldn't it be better to just log all IP traffic (say with "tcpdump -w ...") on a dedicated logging machine, and perhaps have a pruning mechanism that trims any TCP stream to a few megabytes? That way large file transfers wouldn't fill up the logging disk unnecessarily. Add to that some off-machine logging built into sshd or perhaps the pty driver and one can get a pretty good picture of how any break-in was done.

Re:I don't know... (0)

Anonymous Coward | more than 2 years ago | (#38153402)

It ain't broke. Don't fix it.

Re:I don't know... (2)

Jack9 (11421) | more than 2 years ago | (#38152544)

What makes you think you can't use text in parallel? The idea is to have a more secure audit-log, not to replace it and leave a gaping usability hole.

Re:I don't know... (2)

jedidiah (1196) | more than 2 years ago | (#38152818)

Surely it is.

With things like Unity and Wayland and Upstart, we really shouldn't expect anything else.

I do (was: I don't know...) (4, Insightful)

Zero__Kelvin (151819) | more than 2 years ago | (#38152822)

"It's even compatible with the standard syslog interface allowing it to either coexist with or replace the usual syslog daemon with minimal disruption.

Your answer is right in the summary. I can use standard syslog in conjunction with it, and then have a process running in the background that notifies me if the integrity of the text file is violated, thereby getting the best of both worlds.

Here's a better name for the project (1)

Anonymous Coward | more than 2 years ago | (#38152192)

LiveJournal. Oh wait....

Unnecessary (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38152216)

The binary format part of this is unnecessary, at least as far as I (with limited low level programming experience) can tell. Other people have been suggesting methods which would mean you just need a cryptographic hash in each otherwise plain text line, in a standard manner. Still at least it has got a discussion started.

Pointless -- there is already a secure solution (5, Informative)

whoever57 (658626) | more than 2 years ago | (#38152220)

Set your machine to also log over a secure channel to another machine. Perhaps one that only accepts the syslog entries and no other connections. Problem solved.

Re:Pointless -- there is already a secure solution (1)

Trixter (9555) | more than 2 years ago | (#38152274)

I was just thinking the same thing. Have these guys never heard of an off-site syslog server?

Re:Pointless -- there is already a secure solution (3, Insightful)

Fallen Kell (165468) | more than 2 years ago | (#38152330)

How does that help a single stand-alone system that someone came in and rooted and then covered up their tracks? The purpose of these changes is to fix all the cases. Sure there are workarounds for some of the flaws, but that is just it, they are workarounds. This is a true fix.

Re:Pointless -- there is already a secure solution (5, Insightful)

whoever57 (658626) | more than 2 years ago | (#38152482)

How does that help a single stand-alone system that someone came in and rooted and then covered up their tracks?

Does anyone really care about forensic analysis of single stand-alone systems? Do you think that the FBI will go after whoever broke into your home system? Just rebuild the OS and move on.


This is a fix which breaks lots of other stuff. Today, I can open up my logfiles (even the compressed ones) with "vim -R ". The convenience of that will be lost and my analysis will be limited by the tools available to analyze the undocumented, binary logs. What about old log files after the binary format changes? There are so many issues with the proposal and precious few advantages.

Re:Pointless -- there is already a secure solution (1)

Anonymous Coward | more than 2 years ago | (#38153536)

Hopefully if such a system (not the house one you describe) was worth breaking into, it was probably stand-alone to prevent remote attacks in the first place.

And I would hope that someone smart enough to see the wisdom in making the system stand-alone would not be stupid enough to not have other (physical) security (guards, CCTV, door locks, alarms, other remote/out-of-band monitoring, etc.).

Re:Pointless -- there is already a secure solution (5, Insightful)

TheRaven64 (641858) | more than 2 years ago | (#38152492)

The way we used to solve that was to have the syslog output write to a dot-matrix (or other) line printer. Every line in the security logs is written to paper immediately. You can substitute anything that can record things written to RS-232 (cue the arduino fanboys) for the line printer.

This doesn't seem to actually solve the problem - if the person can modify the file, they can modify the file. If the lines are hashed, they just get the plaintext ones, delete the last ones, modify them, and then replay the fake ones and generate a new sequence of hashes. This just means that you need more tools in your recovery filesystem for fault diagnosis.
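
A rough sketch of that rewrite attack, assuming a simple SHA-256 chain over text entries (the chaining details here are assumptions, not the Journal's actual scheme):

import hashlib

def rehash(entries, seed=b"\x00" * 32):
    # Rebuild a perfectly self-consistent chain over whatever entries we keep.
    prev, chained = seed, []
    for e in entries:
        prev = hashlib.sha256(prev + e.encode()).digest()
        chained.append((e, prev.hex()))
    return chained

original = ["cron: job ran",
            "sshd: Accepted password for root from 10.0.0.66",
            "cron: job ran"]
# Drop the incriminating line and regenerate the hashes; the forged chain
# still verifies unless an earlier hash was copied somewhere the attacker
# cannot reach.
forged = rehash([e for e in original if "10.0.0.66" not in e])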

Re:Pointless -- there is already a secure solution (3, Insightful)

Vairon (17314) | more than 2 years ago | (#38152506)

In your stand-alone system scenario what keeps a hacker from deleting those logs entirely or reading all the logs, removing the entries they don't want preserved, then writing them all back out, with a new hash-chain history?

Re:Pointless -- there is already a secure solution (1)

Anonymous Coward | more than 2 years ago | (#38152534)

True fix. Uh-huh.

rm /var/log/syslog

Re:Pointless -- there is already a secure solution (0)

Anonymous Coward | more than 2 years ago | (#38153010)

Cannot remove 'syslog': No such file or directory

Re:Pointless -- there is already a secure solution (4, Informative)

silas_moeckel (234313) | more than 2 years ago | (#38152726)

If you have rooted the system you can parse the file, remove what ya want, and re-sign/hash everything. If you want a standalone system to have secure logging you use something that's write-once. Crypto signing adds nothing unless that signing is coming from a separate system and includes an external variable like a use counter so you can detect the jump. This is a solution looking for a problem. When you have a syslog box accepting UDP syslog as the only open port, an attacker would have to find an exploit or flood out the port.

You should be running something like splunk or octopussy to parse your syslog in real time, generate alerts, etc.

Re:Pointless -- there is already a secure solution (5, Informative)

rev0lt (1950662) | more than 2 years ago | (#38154094)

That's why in BSD systems, you can mark a file as append-only, and with securelevel >=1 not even root can remove the flag.

Re:Pointless -- there is already a secure solution (1)

RollingThunder (88952) | more than 2 years ago | (#38152650)

It doesn't provide evidence of log tampering, so no - it's not the same thing.

Re:Pointless -- there is already a secure solution (1)

Shatrat (855151) | more than 2 years ago | (#38152652)

There is also encrypted SNMPv3 which could be used to securely and reliably send short messages in a client/server architecture.

Re:Pointless -- there is already a secure solution (1)

Zero__Kelvin (151819) | more than 2 years ago | (#38152906)

Right. Because if someone hacks one of your systems, they couldn't possibly hack your other system too. Of course, your solution also ignores the fact that this is simply not an option for many. We don't all have the cash to spend on a separate machine and secure connection, especially if there is a solution that makes it entirely unnecessary. Also, connections fail. If you can do this, why not use a belt and suspenders approach and have all three?

Overcomplicated (5, Funny)

pclminion (145572) | more than 2 years ago | (#38152236)

Back in the late 90's when I first started connecting my home Linux systems to the Internet 24/7, I logged everything imaginable. To prevent tampering/falsification of the logs, I simply printed the log on a continuous-sheet dot matrix printer. Good luck tampering with the printout in my office.

After a while I got to be able to recognize certain types of activity, such as a web user browsing to /index.html, based on the sounds the printer made.

Re:Overcomplicated (5, Funny)

Anonymous Coward | more than 2 years ago | (#38152308)

Did you ever get that OCD treated, or are you still suffering?

Re:Overcomplicated (4, Funny)

pclminion (145572) | more than 2 years ago | (#38152416)

Did you ever get that OCD treated, or are you still suffering?

That's right, every night I'd get into some cozy pajamas, maybe make a fire, cuppa tea, and sit back in a recliner for a stint of light reading. I tell you, last night's series of 404s by the guy who kept mistyping the URL to my "About Me" page were especially riveting.

Re:Overcomplicated (5, Funny)

Thing 1 (178996) | more than 2 years ago | (#38152518)

Yeah, and after a while you're like, "blonde, brunette, redhead"...

Re:Overcomplicated (4, Funny)

dickens (31040) | more than 2 years ago | (#38152318)

Yeah done that.. paper jams were a bitch, though.

I remember even going to the trouble of cutting one of the leads in the RS-232 cable to make the logging printer a true write-only device.

Wrong approach (1)

bigtrike (904535) | more than 2 years ago | (#38152400)

Everyone knows that read-only is more secure.

Re:Overcomplicated (2)

davester666 (731373) | more than 2 years ago | (#38152428)

Really? You had a dot-matrix printer that was capable of reading what it had printed?

Re:Overcomplicated (1)

dickens (31040) | more than 2 years ago | (#38152730)

It had a keyboard. I think it was a DEC LA120.

Re:Overcomplicated (1)

gatkinso (15975) | more than 2 years ago | (#38152502)

Precisely how would this impede spoofing log messages?

Re:Overcomplicated (1)

pclminion (145572) | more than 2 years ago | (#38152546)

It doesn't prevent spoofing of log messages. However, before a log message can be spoofed, the attacker must somehow gain access to the system -- hopefully, this initial access will be logged to paper BEFORE the attacker begins to try to falsify logs.

Re:Overcomplicated (1)

Vellmont (569020) | more than 2 years ago | (#38152592)

Exactly. Paper doesn't scale, and obviously is difficult to machine scan. But the point is all you need to do is send log files to an indelible medium. Paper is just the simplest one to understand. The electronic equivalent would be something like a WORM drive, or optical non-RW drive. I'm sure there's other examples that exist.

Re:Overcomplicated (1)

jedidiah (1196) | more than 2 years ago | (#38152868)

You don't even have to make it a WORM drive, you just need it to look like one. Build hardware that looks like a printer but logs to whatever you like. Make the hardware limited and don't have any other interfaces connect to the storage medium.

Re:Overcomplicated (0)

Anonymous Coward | more than 2 years ago | (#38152746)

They have firewalls now that play noise when deflecting certain types of traffic.

I think you would enjoy it.

Half-Measure (2)

ultramkancool (827732) | more than 2 years ago | (#38152250)

It doesn't really make logging more secure; you can easily just modify the entire log. Plus, if someone's modifying your logs they have root permissions on your machine and then you cannot trust your system; they can put hooks on the log reads to just hide certain entries if necessary. The only real solution is to NOT trust your own system - send all the data to a remote syslog server with no other services running. Why take a half-measure when you should have gone all the way?

Re:Half-Measure (1)

Zero__Kelvin (151819) | more than 2 years ago | (#38152958)

"It doesn't really make logging more secure, you can easily just modify the entire log."

Er, ah, no. You can't. That is the entire point of it.

Re:Half-Measure (2)

marcosdumay (620877) | more than 2 years ago | (#38153124)

You can. If you rooted the machine, you have all the data that this daemon has. You can calculate anything it does, including enough hashes to falsify the logs.

See: Integrity (1)

Zero__Kelvin (151819) | more than 2 years ago | (#38153290)

http://www-cs-students.stanford.edu/~blynn/gitmagic/ch08.html [stanford.edu] If this doesn't explain why you are wrong, keep googling. You'll figure it out eventually.

Re:See: Integrity (3, Informative)

ultramkancool (827732) | more than 2 years ago | (#38153328)

Your hashes don't have to match anything. This does not apply. You can just recreate the entire syslog database.

Re:See: Integrity (1)

Zero__Kelvin (151819) | more than 2 years ago | (#38153450)

In theory, theory always works. In practice it often doesn't. Integrity also means that all things appear to be normal. The events have to keep coming in and must be logged in the correct format. As this happens more hashes are being folded in. If you stop that process, it becomes obvious. If you don't, you cannot calculate all your modifications in real time. Do you really think that the creators of this tool haven't thought about all of this? Seriously?

Re:See: Integrity (1)

ultramkancool (827732) | more than 2 years ago | (#38153562)

Syslog being down for seconds is not obvious, on the other hand it's very easy to say... inject into syslogd to hide your modifications in real time. I hook libc functions using trampolines to write LD_PRELOAD rootkits myself. I just don't get the point, it seems like a really shitty half-measure.

Re:See: Integrity (0)

Anonymous Coward | more than 2 years ago | (#38153638)

I highly doubt that events are coming in faster than I can rehash them.
If that were the case the bottleneck in the system would be the logging in which case no one will want to use it.
Even if I do stop logging and then simply replicate several minutes of log entries, no one will ever notice without explicitly inspecting the log files for such tampering.
If you know enough about the target you could probably even generate believable log entries.
The solution to this is to store the hashes at some other location, in which case you might as well forget about hashing and just send all the logs to the other location.

Re:See: Integrity (1)

Zero__Kelvin (151819) | more than 2 years ago | (#38153704)

"The solution to this is to store the hashes at some other location, in which case you might as well forget about hashing and just send all the logs to the other location."

What a coincidence. It is almost as if you read the article. Sending hashes periodically is indeed the correct solution. Sending a hash periodically is far better than sending the whole stream, for what should be obvious reasons.

binary format? (0)

Anonymous Coward | more than 2 years ago | (#38152294)

I don't mind having a binary format as an option, but having a text format also available is absolutely essential IMHO. Rsyslog and Syslog-ng already can write to various databases too.

The "chained hashing" is handy to catch alterations, but beyond that, this thing doesn't seem to be bringing much to the table.

Re:binary format? (1)

Vairon (17314) | more than 2 years ago | (#38152646)

The "chained hashing" sounds handy but what keeps a tool from reading the entire log and write it back out, with some log entries removed, with a new chain-hash history that validates the entries that are left?

Easy to trash a log? (0)

Anonymous Coward | more than 2 years ago | (#38152312)

If I modify a single line in the log, thereby changing its hash, do I therefore invalidate (or worse, render unreadable) every entry that follows?

Re:Easy to trash a log? (1, Insightful)

X0563511 (793323) | more than 2 years ago | (#38152496)

You should probably go learn what a digital signature is and how it is not encryption.

Secure Syslog (0)

Anonymous Coward | more than 2 years ago | (#38152336)

It's called a dot-matrix printer.

How? (5, Insightful)

Bert64 (520050) | more than 2 years ago | (#38152364)

Log entries are "cryptographically hashed along with the hash of the previous entry in the file" resulting in a verifiable chain of entries.

So this means that in order for someone malicious to modify a log entry, all they really need to do is then re-hash all subsequent entries?

Will tail work? (1)

marcosdumay (620877) | more than 2 years ago | (#38152444)

Where will the journal be located?
Will tail on it give me any useful information (or will I have to read thousands of lines before finding the logs of the application I want)?
How will it keep indices without unneeded overhead? (Let's get real, log files are rarely read. Why optimize for reading?)
When they change the format of the journal, will I have to update all my log parsers?

logrotate ...? (1)

fahrbot-bot (874524) | more than 2 years ago | (#38152454)

"cryptographically hashed along with the hash of the previous entry in the file"

Have fun rotating your logs!

Re:logrotate ...? (0)

Anonymous Coward | more than 2 years ago | (#38152624)

Because of course people would just keep using the same logrotate scripts with this. You are an idiot.

Not sure I like this... (1)

nine-times (778537) | more than 2 years ago | (#38152504)

I'm all for making real improvements, and I'm sure that logging could be improved in various ways. However, when I'm looking at logs, it's generally because something is broken and I want to find information on how to fix it quickly and easily. Storing something in straight text makes it extremely accessible. It's not just about using grep, which many people are accustomed to, but also because text viewers are simple. If your computer can't run programs like cat, tail, or nano, then you've got big problems. However, even if you can't run those programs for some reason, you can copy a text file to another system-- any other system-- and read it without any special software or encryption keys.

If you want to make another logging system that also tracks security-related information in a way that's easy to audit, I suppose that's worthwhile. However, if you want even basic diagnostic information to be stored in something other than plain-text, then you'd better have a simple, robust, cross-platform method of reading that data. After all, worrying about hackers is a bit of a fringe case. Most of the time, problems are caused by misconfiguration, software bugs, or bad hardware.

Use both (1)

jbov (2202938) | more than 2 years ago | (#38153042)

The summary states that it can be used with your usual syslog daemon. Therefore you can use your usual tools to analyze your logs, but you still have an audit trail to identify log tampering. The downside of this may be more disk i/o.

write-once read-many fs (1)

kipsate (314423) | more than 2 years ago | (#38152556)

In cases where avoiding tampering is crucial, just log to a write-once filesystem, or, indeed, a printer.

Send your logs to a remote/central server (3, Insightful)

Nos. (179609) | more than 2 years ago | (#38152586)

There is no real problem this solves. You are far better off logging remotely. This does not stop an attacker from hiding his tracks; you'll just know the logs were altered, but you won't know what was removed, or likely if/when you can start trusting them again. Log remotely, use encryption, and use TCP. Your central/remote logger is your trusted source for logs. You close everything except incoming logs. Parse and alert on the logs from there. It's simple to do, it's real time, and it solves a lot more issues than this type of solution ever will.
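
As one hedged example of the remote-logging approach, Python's standard library can already forward a process's own messages to a central host over TCP (the hostname and port below are placeholders; encryption would be handled underneath by TLS-enabled rsyslog/syslog-ng, stunnel, or a VPN):

import logging, socket
from logging.handlers import SysLogHandler

# Ship application logs to a dedicated loghost over TCP.
handler = SysLogHandler(address=("loghost.example.com", 514),
                        socktype=socket.SOCK_STREAM)
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.warning("user harald logged in")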

Very simple text-based implementation (2)

digitalderbs (718388) | more than 2 years ago | (#38152602)

Signing log messages does not need to be complicated or incompatible with current text-based logging. Hashing messages is incredibly easy to do, and there's really no reason not to do it. I just implemented this in python in less than two minutes.

>>> from hashlib import md5
>>> log = lambda last_message, message: "{}: {}".format( md5(last_message).hexdigest(), message)

The output hashes the last message with the current message:

8a023b9cbebe055e4b080585ccba3246: [ 19.609619] userif-2: sent link up event.
649a2719064f7f276462464527b48a69: [ 29.680009] eth0: no IPv6 routers present

No binaries, still grepable, single host, and most importantly, there is now a trail that can be verified.
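
A companion check for that format might look like the following sketch (it assumes each line is the md5 of the previous full line, a colon, then the message; the parent leaves it ambiguous whether "last_message" is the previous message or the previous full line, so this is one reading):

from hashlib import md5

def verify(lines, first_prev=""):
    # Walk the file and flag any break in the chain.
    prev = first_prev
    for n, line in enumerate(lines, 1):
        digest, _, _message = line.partition(": ")
        if digest != md5(prev.encode()).hexdigest():
            return "chain broken at line {}".format(n)
        prev = line
    return "chain intact"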

postscript on signing log messages (1)

digitalderbs (718388) | more than 2 years ago | (#38152806)

It occurred to me shortly after posting that a simple hash could easily be forged, and that a key signing of sorts would be needed to make it secure, though the system would have to be able to sign its own log messages without giving the hacker access to the signing key.
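
One way to get that keyed signing with only the standard library would be an HMAC per entry; SECRET_KEY below is a placeholder, and keeping it out of the attacker's reach is exactly the unsolved part the parent mentions:

import hmac, hashlib

SECRET_KEY = b"replace-with-a-real-secret"  # placeholder; must stay out of the attacker's hands

def sign(prev_line, message):
    # A keyed hash (HMAC) instead of a bare md5: without the key, an attacker
    # cannot regenerate valid tags after editing the log.
    tag = hmac.new(SECRET_KEY, (prev_line + message).encode(), hashlib.sha256).hexdigest()
    return "{}: {}".format(tag, message)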

Re:Very simple text-based implementation (1)

icebraining (1313345) | more than 2 years ago | (#38152890)

Digital signing is more than hashing. You need to encrypt the hash with a private key.

Re:Very simple text-based implementation (1)

atisss (1661313) | more than 2 years ago | (#38153196)

The key must be located on the same machine for it to work. So it's not secure anyway.

Syslog on Windows (1)

C-Shalom (969608) | more than 2 years ago | (#38152690)

Secure Syslog?!?
I'm still waiting for regular syslog on Windows.

Re:Syslog on Windows (0)

Anonymous Coward | more than 2 years ago | (#38152856)

You mean Event Viewer, the thing with which you can read pretty much any log you wish, customise logs, and create logs for whatever application is capable of outputting them?

Oh, and send it to an offsite server?

Now I wonder why syslog doesn't exist...

Re:Syslog on Windows (2)

jbov (2202938) | more than 2 years ago | (#38153240)

Oh you mean the terribly slow to open, slow to run, tree view application that must be loaded through the GUI? The one where you have to click on each event to view the details? What happens if you can't run this application? How do you access these logs from the Windows recovery console?

Re:Syslog on Windows (1)

Xaemyl (88001) | more than 2 years ago | (#38153370)

One way is using the get-winevent cmdlet from powershell. http://blogs.technet.com/b/heyscriptingguy/archive/2011/11/14/use-custom-views-from-windows-event-viewer-in-powershell.aspx [technet.com] .

Dunno about the Windows recovery console, however.

Thanks (1)

jbov (2202938) | more than 2 years ago | (#38153554)

That's a little better. I don't see it as comparable to syslog with unix/linux shell tools. Then again, I'm a powershell noob and would miss my vim key bindings.
I searched but couldn't locate any way to read the log files via the recovery console. Maybe (I hope) someone will enlighten me here.

Re:Syslog on Windows (1)

siride (974284) | more than 2 years ago | (#38153480)

The event log is available via API. You can write a VBScript script to dump it to the console. There are a bunch of these available online. Yeah, it's a little less convenient than having a text file readily available.

U don't know much about Windows, do you? (0)

Anonymous Coward | more than 2 years ago | (#38153900)

Local Security Policy Tool/secpol.msc has MANY options 4 logging w/in its tree items!

Follow them like so down its left-hand side pane:

Security Settings
Advanced Audit Policy Configuration
System Audit Policies - Local Group Policy Object

Beneath that last tree item in the left-hand side pane, are 10 major categories of possible auditing.

Beneath those are 57 subitems for logging as well...

* The rest can be done in other tools (e.g.-> like Windows Firewall logging for IP access etc.)

APK

P.S.=> The SAME can be accomplished from an AD Group Policy GLOBAL NETWORK LEVEL as well, using gpedit.msc/Group Policy Editor (so you don't have to manage EVERY SINGLE WORKSTATION NODE to do it, machine-by-individual-machine)!

Programming custom apps to do logging via API calls to the EventViewer's easy as well...

SO....There you go, "here endeth the lesson"

... apk

Serious issues with this (5, Insightful)

anarcat (306985) | more than 2 years ago | (#38152742)

Now, without getting into how much i dislike Pulseaudio (maybe because i'm an old UNIX fart, thank you very much), I think there are really serious issues with "The Journal", which I can summarize as such:

1. the problem it's trying to fix is already fixed
2. the problem isn't fixed by the solution
3. it makes everything more opaque
4. it makes the problem worse

The first issue is that it is trying to fix a problem that is already easily solved with existing tools: just send your darn logs to an external machine already. Syslog has supported networked logging forever.

Second, if you log on a machine and that machine gets compromised, I don't see how having checksums and a chained log will keep anyone from just trashing the whole 'journal' by running:

rm -rf /var/log

What am I missing here?

Third, this implements yet another obscure and opaque system that keeps the users away from how their system works, making everything available only through a special tool (the journal), which depends on another special tool (systemd), both of which are already controversial. I like grepping my logs. I understand http://logcheck.org [slashdot.org] and similar tools are not working very well, but that's because there isn't a common format for logging, which makes parsing hard and application dependent. From what I understand, this is not something The Journal is trying to address either. To take an example from their document:

MESSAGE=User harald logged in
MESSAGE_ID=422bc3d271414bc8bc9570f222f24a9
_EXE=/lib/systemd/systemd-logind
[... 14 lines of more stuff snipped]

(Nevermind for a second the fact that to carry the same amount of information, syslog only needs one line (not 14), which makes things actually readable by humans.)

The actual important bit here is "User harald logged in". But the thing we want to know is: is that a good thing or a bad thing? If it was "User harald login failed", would it be flagged as such? It's not in the current objectives, it seems, to improve the system in that direction. I would rather see a common agreement on syntax and keywords to use, and respect for the syslog levels [debian.net] (e.g. EMERG, ALERT, ..., INFO, DEBUG), than reinventing the wheel like this.

Fourth, what happens when our happy cracker destroys those tools? This is a big problem for what they are actually trying to solve, especially since they do not intend to make the format standard, according to the design document [google.com] (published on you-know-who, unfortunately). So you could end up in a situation where you can't parse those logs because the machine that generated them is gone, and you would need to track down exactly which version of the software generated it. Good luck with that.

I'll pass. Again.

Re:Serious issues with this (1)

skids (119237) | more than 2 years ago | (#38153046)

Now, without getting into how much i dislike Pulseaudio

Hey, I would have gladly listened to that. Even JACK is picking up bad habits from that pile of crap.

I would rather see a common agreement on syntax and keywords to use, and respect for the syslog levels

Hear, hear. But then, it's much harder to be a leader in getting people to do things they should have been doing long ago than it is to lead by saying: "here's my bright new shiny object."

Re:Serious issues with this (4, Informative)

jbov (2202938) | more than 2 years ago | (#38153320)

I can mostly agree with you. There is one thing you might be missing.

Second, if you log on a machine and that machine gets compromised, I don't see how having checksums and a chained log will keep anyone from just running trashing the whole 'journal'.
rm -rf /var/log
What am i missing here?

Fourth, what happens when our happy cracker destroys those tools?

I think what you are missing is this replacement is intended to prevent "undetected" tampering with the logs. Currently, a cracker can delete the log entries that would identify his or her activities on the machine, thereby going unnoticed. Deleting the log files or destroying the tools, as you suggested, would certainly be a detectable sign that the machine was compromised.

Re:Serious issues with this (1)

Ed Avis (5917) | more than 2 years ago | (#38153338)

As I understand it, you might not want to send the whole log activity across the network (imagine a mobile device, say) but you still want to get the security against tampering that this provides. So instead you just send a cryptographic hash of the whole journal once a day - or even print it out to a dot matrix printer as someone else suggested. You can then use that hash to check the whole journal hasn't been tampered with since the hash was generated.

Second, of course this does not provide security against someone nuking the whole log. But if you see the whole of /var/log is gone, that's already a pretty strong indication that something is wrong with your machine. The attack guarded against is someone breaking in and sneakily modifying past log entries to hide their traces.

Third, yes it would be harder to grep than a plain text file. Luckily, Unix also has the concept of pipes, so I guess it won't be any harder than 'journalcat | grep pattern' where journalcat is the tool that spools out the whole journal as text. That should be good enough.

Fourth, if your system is potentially compromised then of course you cannot trust that system to give you an honest answer about what the logs contain. That is equally true with plaintext syslog or any logging system restricted to the local machine. You can, however, take a copy of the whole journal, put it on a clean machine and analyse it there. The advantage over syslog is that you can use the cryptographic hash (which you were taking a copy of every 24 hours, as above) to check that the journal is uncorrupted. If somebody has tried to mess with the log, they won't be able to do so without you noticing.

"The Journal" has other advantages over syslog, including some measure of checking who is logging what (so you can't start a random process and claim to be apache on port 80 for the purpose of log messages).
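
A sketch of that daily check (assuming plain text entries and a SHA-256 chain; the real Journal format differs) could be as simple as recomputing the running hash and comparing it to the copy that was shipped off-box:

import hashlib

def top_hash(entries, seed=b"\x00" * 32):
    # Fold every entry into one running hash; the final value authenticates
    # the whole chain up to that point.
    h = seed
    for e in entries:
        h = hashlib.sha256(h + e.encode()).digest()
    return h.hex()

# Compare today's recomputed value against the hash saved off-box (or printed)
# yesterday; a mismatch means earlier entries were altered.
def tampered(entries_up_to_yesterday, saved_hash):
    return top_hash(entries_up_to_yesterday) != saved_hash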

Re:Serious issues with this (2)

AmbushBug (71207) | more than 2 years ago | (#38153578)

First issue: This is great if you have an external system to log to - if not, you're boned. This new logging system seems to cover both cases.

Second issue: One of the big reasons for doing this is to be able to detect when the log has been altered to cover a cracker's tracks. Obviously, a deleted log file is easily detected and a big indicator that your system has been compromised, so I'm not seeing your point here.

Third issue: As has been stated above, you can log to both the Journal and good old text based log files. That way you can still use your existing tools on the text file while still being notified of log file alteration. I agree that a common format for log entries would be nice but may not be possible since not every application logs the same kind of data. Note also that this proposal allows for arbitrary key/value pairs, so some standard conventions will probably come about after it's been used for a while.

Fourth issue: Not sure I understand what you are talking about here... Obviously, backward compatibility will have to be taken into account by the devs. You should be able to read the files on other machines if you backed up your encryption keys, etc. (you do back up that stuff, right?). By reading the articles, it sounds like the devs have thought about these issues and/or they have already been raised by others. They seem to be fairly easy to deal with.

I'm not necessarily on board with this proposed system either, but your issues seem like they've already been covered by the proposed design.

Re:Serious issues with this (3, Insightful)

rnturn (11092) | more than 2 years ago | (#38154060)

``This is great if you have an external system to log to - if not, you're boned.''

Seriously, how hard is it to set one of these up? Not very. How expensive is it to do this? Not very. Are we going to toss out the current method of logging because of the folks who only have Linux running on a laptop and have that as their only computer?

You certainly would not need a tremendously powerful PC to sit out on your network and do nothing but accept syslog messages from other systems.

``you can log to both the Journal and good old text based log files. That way you can still use your existing tools on the text file while still being notified of log file alteration''

My understanding (someone correct me if I'm wrong on this) is that there will be only a single logging system, not one doing this Journal format and another for text logs. The text available from the Journal would have to come from a tool that uses certain new library calls to extract information from the Journal. Users would have to pipe the output of that, one supposes, into tools to search for error messages of interest. It's not terribly hard to use but...

``backward compatibility will have to be taken into account by the devs''

Not necessarily. Several of the summaries I've read about this new logging system indicate that the format hasn't been agreed on and may change from time to time. And... there is no guarantee when they'll get around to documenting the format. Good grief! First we have to change all of our log file search scripts to use the new Journal dumping tool. Then the format changes so we have to modify our scripts again. And again, perhaps, whenever it suits Lennart. How nice!

Re:Serious issues with this (0)

Anonymous Coward | more than 2 years ago | (#38154198)

This is great if you have an external system to log to - if not, you're boned.

Everyone can have an external system to log to. In my case it's my OpenWRT router with attached usb disk, colo/hosting providers should be able to provide a central loghost. If a company has a single-device IT configuration they have more serious problems than logging.

One of the big reasons for doing this is to be able to detect when the log has been altered

And you can't do that by modifying existing tools? Like, say, add hashes to the already existing syslog?

As has been stated above, you can log to both the Journal and good old text based log file

Then of what use is the journal? Bear in mind that rsyslog already supports database backends, so fast-search by indexing has already been done.

You should be able to read the files on other machines if you backed up your encryption keys

No, you're missing the point. The binary format is undocumented and subject to change, and they make no claims about backwards compatibility. So if you've lost the exact version of journald that created the logs, there is no guarantee that a different version will be able to successfully parse the logs. Additionally, journald uses deduplication, e.g. it replaces an application name by a dynamically generated token. Good luck browsing your logs if the token dictionaries have been lost.

Absurd (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38152858)

From the FAQ:

we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly.

Not only does it generate logfiles that are not human-readable, they're also in a format that in two years not even their own tool will be able to read. If it is still around in two years, which I doubt.

Re:Absurd (0)

Anonymous Coward | more than 2 years ago | (#38153762)

Why? It fits perfectly into the Linux world: obscure, not compatible with itself in less than a month, and yet another variation of something but still half-assed so the next variation has a chance to emerge.

It's all idiots not understanding a thing and thinking they came up with the ultimate solution.

Recalculate all hashes? (0)

Anonymous Coward | more than 2 years ago | (#38153078)

Can someone explain to me why you can't simply edit the log entry you want to change, and then recalculate every hash for the rest of the file?
Unless you want to change some log entry from months ago with gigabytes of log entries to re-hash this should be doable right?
Unless you keep some additional copy somewhere to compare to, in which case the hash doesn't really add anything.

Funny typo in the design docs (1)

Superken7 (893292) | more than 2 years ago | (#38153214)

"journal all entries are cryptographically hashed along with the hash of the previous entry in the file. This results in a chain of entries, where each entry authenticates all previous ones. If the top-most hash is regularly saved to a secure write-only location, the full chain is authenticated by it." (emphasis mine)

Nice security, erm, feature..?

Re:Funny typo in the design docs (1)

Superken7 (893292) | more than 2 years ago | (#38153266)

I'm thinking that maybe this isn't a typo, and it's intended to just avoid reading. But how would you prevent it from being overwritten, thus producing a new, rebuilt, forged but apparently cryptographically correct hashed logfile?

GNOME 3 crack (4, Insightful)

David Gerard (12369) | more than 2 years ago | (#38153674)

This is on the same crack as the rest of GNOME 3. They've invented the Windows event log, well done! Now I hand you a trashed system, but you can read the disk. You look into /var/log/syslog ... no, you don't. "We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly. The access is granted by a shared library and a command line tool."

Speaking as a sysadmin, I shudder at this incredibly stupid idea. Are they even thinking of how to get something actually readable in disaster?

Seriously? (5, Insightful)

cdukes (709042) | more than 2 years ago | (#38153872)

Is this a joke? Or is it just someone trying to push their ideology of what they think should be done onto the rest of the world to make their idea a standard?

Doing something like this would be a sure way for Linux to shoot itself in the foot. For evidence, one only needs to look as far as Microsoft, which insists on doing it their special way and expects everyone else to do what they deem as "good". The concept of syslog messages is that they are meant to be 'open' so disparate systems can read the data. How do you propose to integrate with large syslog reporting/analysis tools like LogZilla (http://www.logzilla.pro)?

The authors are correct that a format needs to be written so that parsing is easier. But how is their solution any "easier"? Instead, there is a much more effective solution available known as CEE (http://cee.mitre.org/) that proposes to include fields in the text.

> Syslog data is not authenticated.
If you need that, then use TLS/certificates when logging to a centralized host.

> Syslog is only one of many logging systems on a Linux machine.
Surely you're aware of syslog-ng and rsyslog.

> Access control to the syslogs is non-existent.
To locally stored logs? Maybe (if you don't chown them to root?). But if you are using syslog-ng or rsyslog and sending to a centralized host, then what is "local" to the system becomes irrelevant.

> Disk usage limits are only applied at fixed intervals, leaving systems vulnerable to DDoS attacks.
Again, a moot point if admins are doing it correctly by centralizing with tools like syslog-ng, rsyslog and LogZilla.

> "For example, the recent, much discussed kernel.org intrusion involved log file manipulation which was only detected by chance."
Oh, you mean they weren't managing their syslog properly so they got screwed and blamed their lack of management on the protocol itself. Ok, yeah, that makes sense.

They also noted in their paper that "In a later version we plan to extend the journal minimally to support live remote logging, in both PUSH and PULL modes always using a local journal as buffer for a store-and-forward logic."
I can't understand how this would be an afterthought. They are clearly thinking "locally" rather than globally. Plus, if it is to eventually be able to send, what format will it use? Text? Ok, now they are back to their original complaint.

All of this really just makes me cringe. If RH/Fedora do this, there is no way for people that manage large system infrastructures to include those systems in their management. I am responsible for managing over 8,000 Cisco devices on top of several hundred Linux systems. Am I supposed to log on to each Linux server to get log information?

Doesn't seem to be very useful (1)

vadim_t (324782) | more than 2 years ago | (#38153964)

It seems pointless. If somebody already has enough privileges on your server to mess with the logs, how is a hash going to help? There's a whole bunch of things an attacker can do that makes this useless.

Most obviously, they can corrupt or erase the contents of the file. Noticeable, but the traces of your access can be deleted, so that the admin can't figure out who did it.

The attacker can save an old file, do whatever needs hiding, and replace the file with the old copy. Depending on how it works, this may result in logging continuing to the replaced file, or the log daemon keeping on writing into a now-nameless file while an old one is visible in the directory instead.

The hash seems pointless. If the attacker can modify the logs directly, they likely have root access, which means they can debug any process, and subvert any cryptography that might be happening. They can also regenerate the log file with the correct hashes but with a few deleted lines, or replace the daemon with one that doesn't log some things.

What else will he break (0)

Anonymous Coward | more than 2 years ago | (#38154108)

Oh, look, Poettering is breaking something else. It must be a hobby of his.

What Linux really could use for secure logging is mounting /var/log as an append-only file system. If you can only read from and append to a file, it makes it awfully difficult to tamper with it. http://www.freebsd.org/doc/handbook/securing-freebsd.html
