Software Linux

Secure Syslog Replacement Proposed 248

LinuxScribe writes with this bit from IT World: "In an effort to foil crackers' attempts to cover their tracks by altering text-based syslogs, and improve the syslog process as a whole, developers Lennart Poettering and Kay Sievers are proposing a new tool called The Journal. Using key/value pairs in a binary format, The Journal is already stirring up a lot of objections." Log entries are "cryptographically hashed along with the hash of the previous entry in the file" resulting in a verifiable chain of entries. This is being done as an extension to systemd (git branch). The design doesn't just make logging more secure, but introduces a number of overdue improvements to the logging process. It's even compatible with the standard syslog interface allowing it to either coexist with or replace the usual syslog daemon with minimal disruption.
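
The chaining idea itself is simple. Below is a minimal Python sketch of it; this is an illustration only, not the journal's actual on-disk format, and the genesis value and entry layout here are assumptions.

    from hashlib import sha256

    def append_entry(chain, message):
        # Each entry's digest covers the previous entry's digest plus its own
        # text, so editing any earlier entry invalidates every later digest.
        prev_digest = chain[-1][0] if chain else "0" * 64  # assumed genesis value
        digest = sha256((prev_digest + message).encode()).hexdigest()
        chain.append((digest, message))

    chain = []
    append_entry(chain, "Nov 23 16:50:01 host sshd[123]: Accepted publickey for harald")
    append_entry(chain, "Nov 23 16:50:05 host su: harald to root on pts/0")
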
  • I don't know... (Score:5, Informative)

    by ksd1337 ( 1029386 ) on Wednesday November 23, 2011 @04:50PM (#38152154)
    Text is damn convenient to use. How are you gonna grep a binary file?
    • Re:I don't know... (Score:5, Informative)

      by Anonymous Coward on Wednesday November 23, 2011 @04:53PM (#38152194)

      journalgrep -e "Nov 0[1234]-[0-9][0-9]-2011" | less

    • Re:I don't know... (Score:5, Insightful)

      by iluvcapra ( 782887 ) on Wednesday November 23, 2011 @04:57PM (#38152252)

      Witness the deeply-ingrained UNIX Philosophy thing where if you can't use grep(1), it naturally follows that the thing is impossible to search.

      You can't grep a Berkeley DB, yet for some reason you can find stuff in it, too.

      • by Anonymous Coward on Wednesday November 23, 2011 @05:13PM (#38152488)
        You can't grep a Berkeley DB, yet for some reason you can find stuff in it, too.

        strings berkeley.db | grep "data"

        Enjoy,

      • Re:I don't know... (Score:5, Insightful)

        by quanticle ( 843097 ) on Wednesday November 23, 2011 @05:14PM (#38152498) Homepage

        The problem isn't searching in the ordinary case. The problem is searching in the failure case. I can grep a truncated, mangled text file. If I truncate and mangle your BerkeleyDB can you still search it?

      • Re:I don't know... (Score:5, Insightful)

        by TheRaven64 ( 641858 ) on Wednesday November 23, 2011 @05:18PM (#38152536) Journal
        Grep is just one example. Grep lets me search any text file. Tail -f lets me watch anything that's added to it. Wc -l lets me enumerate the entries in it. Awk lets me extract elements from it. There are lots of other standard UNIX utilities for manipulating text files. If you are replacing a text file with a binary file then you need to provide equivalent functionality to all of these. If this file is one that is important for system recovery, then you need to put all of these in the root partition, without significantly increasing its size. These are not insurmountable problems, but they are problems.
        • This is the first thing I thought too. But if I'm going to have many piped commands, why not add one more that cats the thing in text format? I can't think of a reason why that would be inconvenient. My rotated syslog is gzipped and I can just zcat it, or even cat | gunzip - | whatever it. So the slight inconvenience *might* well be outweighed by the new benefits.

        • My argument in that case is that grep is insufficiently expressive for the modern world.

          The UNIX "everything is a file, and those files are all ^J delimited records" is a hack from a world where record-based file systems were seen as overcomplicated (because, well, MULTICS-style files were overcomplicated). It's time to move up [gnu.org].

          • Re:I don't know... (Score:5, Insightful)

            by hedwards ( 940851 ) on Wednesday November 23, 2011 @06:40PM (#38153410)

            I disagree; the fact that such a model still works so well decades later is definitely evidence that they were doing something right. When it comes down to it, if you make everything a file then you don't have to worry about envisioning niche uses, as most of them can be accomplished by chaining together several commands. The ones that can't are still not impossible, as you can just throw together a Perl script or similar to manage them.

            • jobs. (Score:5, Funny)

              by mevets ( 322601 ) on Wednesday November 23, 2011 @07:43PM (#38153984)

              Attitudes like yours cost the industry jobs. It is best for us if we store data away in increasingly inappropriate places so that lusers have to pay us to get their own data.

              Hell, going back to standard data formats and reusable tools would be the death of a thousand increasingly bizarre specialty languages alone.

              As a penance, you should rewrite diff in python to work on sqlite databases. That should set the industry back another few years.

            • A counterargument would be that if we had a minimally structured container format that everyone used you could save a lot of time. Untyped binary blobs work as a lowest common denominator format, but they can be a huge pain to work with. Personally I wish we'd move to a more structured model for data (the relational people have a point) -- I find it far more pleasant to e.g. write an SQL query than to hack together a script to munge data from five different file formats to get the output I need.

              But, of cour

                • Yes, but the reason we wouldn't do that is because if most things are human readable you can just manually do the edits yourself if you need to. Otherwise it requires some sort of justification, as other formats have been constantly changed over the decades to keep up with advancements.

                It's a distortion to suggest that the model doesn't work when clearly it does. The formats of some types of files and devices have changed, but the way they interact hasn't. Most of the time I spend fixing Windows is because

                • It's a distortion to suggest that the model doesn't work when clearly it does.

                  It sort of works (see all the issues mentioned already) -- that doesn't mean that it can't be improved upon, nor that this particular idea is a bad one.

                  The major problem with the UNIX philosophy in this area is that lots and lots of programs now contain informally specified (i.e. specified by implementation), badly written ad-hoc parsers for the output of other programs. This leads to lots of minor, but amazingly annoying issues(

                  • It's a distortion to suggest that the model doesn't work when clearly it does. It sort of works (see all the issues mentioned already) -- that doesn't mean that it can't be improved upon, nor that this particular idea is a bad one.

                    It sort of does not work. All the greybeards' tools stop working, or require arcane workarounds, as soon as you have a single fucking whitespace in the filename. The only reason why people do not find it excruciatingly annoying to write shell scripts is because such issues are

        • Re:I don't know... (Score:5, Interesting)

          by pclminion ( 145572 ) on Wednesday November 23, 2011 @06:24PM (#38153278)

          If you are replacing a text file with a binary file then you need to provide equivalent functionality to all of these.

          No, I just need to provide a bin2txt program. The UNIX philosophy, I think you missed it. It's based on simple, self-contained, modular components, not some "everything is just text!" fantasy.

        • by makomk ( 752139 )

          Don't worry - soon Lennart's going to kill off the ability to have separate root and /usr partitions anyway. After all, who really needs it?

      • Not being able to grep the logs would suck. It would break every hack script I have for checking things in the logs.

        Furthermore, I'm not sure what problem the binary file with crypto signing would solve vs. just also logging to a secure log machine. Syslog already allows one to duplicate the logging to any number of off-machine syslog daemons.

        For figuring out how a break-in was done, wouldn't it be better to just log all IP traffic (say with "tcpdump -w ...") on a dedicated logging machine and perhaps have

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          The more secure thing to do with logs is ship them to another host. This idea of signing log messages is ridiculous, because you will just have intruders signing their messages with the keys you thoughtfully provided.

          I suspect this is really just the preference of the systemd dudes.

    • by Jack9 ( 11421 )

      What makes you think you can't use text in parallel? The idea is to have a more secure audit-log, not to replace it and leave a gaping usability hole.

      • by jedidiah ( 1196 )

        Surely it is.

        With things like Unity and Wayland and Upstart, we really shouldn't expect anything else.

      • by makomk ( 752139 )

        From the official FAQ linked in the summary:

        My application needs traditional text log files on disk, can I configure journald to generate those?
        No, you can’t. If you need this, just run the journal side-by-side with a traditional syslog implementation like rsyslog which can generate this file for you.
        Why doesn’t the journal generate traditional log files?
        Well, for starters, traditional log files are not indexed, and many key operations very slow with a complexity of O(n). The native journal file format allows O(log(n)) complexity or better for all important operations. For more reasons, refer to the sections above.

        So yes, the idea is to totally replace plain-text logging and leave a gaping usability hole. The suggested workaround will only work until software starts using The Journal directly without going through syslog, which is I think the end plan since syslog presumably can't support all the neat extra functionality.

    • by Zero__Kelvin ( 151819 ) on Wednesday November 23, 2011 @05:45PM (#38152822) Homepage

      "It's even compatible with the standard syslog interface allowing it to either coexist with or replace the usual syslog daemon with minimal disruption.

      Your answer is right in the summary. I can use standard syslog in conjunction with it, and then have a process running in the background that notifies me if the integrity of the text file is violated, thereby getting the best of both worlds.

  • Unnecessary (Score:2, Interesting)

    by Anonymous Coward

    The binary format part of this is unnecessary, at least as far as I (with limited low level programming experience) can tell. Other people have been suggesting methods which would mean you just need a cryptographic hash in each otherwise plain text line, in a standard manner. Still, at least it has got a discussion started.

    • Apparently the binary format is there to support fields and log lines that normally wouldn't look nice as a single text row.
  • by whoever57 ( 658626 ) on Wednesday November 23, 2011 @04:55PM (#38152220) Journal
    Set your machine to also log over a secure channel to another machine. Perhaps one that only accepts the syslog entries and no other connections. Problem solved.
    • by Trixter ( 9555 )
      I was just thinking the same thing. Have these guys never heard of an off-site syslog server?
    • by Fallen Kell ( 165468 ) on Wednesday November 23, 2011 @05:01PM (#38152330)
      How does that help a single stand-alone system that someone came in and rooted and then covered up their tracks? The purpose of these changes is to fix all the cases. Sure there are workarounds for some of the flaws, but that is just it, they are workarounds. This is a true fix.
      • by whoever57 ( 658626 ) on Wednesday November 23, 2011 @05:13PM (#38152482) Journal

        How does that help a single stand-alone system that someone came in and rooted and then covered up their tracks?

        Does anyone really care about forensic analysis of single stand-alone systems? Do you think that the FBI will go after whoever broke into your home system? Just rebuild the OS and move on.


        This is a fix which breaks lots of other stuff. Today, I can open up my logfiles (even the compressed ones) with "vim -R ". The convenience of that will be lost and my analysis will be limited by the tools available to analyze the undocumented, binary logs. What about old log files after the binary format changes? There are so many issues with the proposal and precious few advantages.

      • by TheRaven64 ( 641858 ) on Wednesday November 23, 2011 @05:14PM (#38152492) Journal

        The way we used to solve that was to have the syslog output write to a dot-matrix (or other) line printer. Every line in the security logs is written to paper immediately. You can substitute anything that can record things written to RS-232 (cue the arduino fanboys) for the line printer.

        This doesn't seem to actually solve the problem - if the person can modify the file, they can modify the file. If the lines are hashed, they just get the plaintext ones, delete the last ones, modify them, and then replay the fake ones and generate a new sequence of hashes. This just means that you need more tools in your recovery filesystem for fault diagnosis.

      • by Vairon ( 17314 ) on Wednesday November 23, 2011 @05:15PM (#38152506)

        In your stand-alone system scenario what keeps a hacker from deleting those logs entirely or reading all the logs, removing the entries they don't want preserved, then writing them all back out, with a new hash-chain history?

      • If you have rooted the system you can parse the file, remove what ya want, and re-sign/hash everything. If you want a standalone system to have secure logging you use something that's write-once. Crypto signing adds nothing unless that signing is coming from a separate system and includes an external variable like a use counter so you can detect the jump. This is a solution looking for a problem. When you have a syslog box accepting UDP syslog as the only open port, all an attacker can do is find an exploit or flood the port.

        You should be running something like splunk or octopussy to parse your syslog in real time, generate alerts, etc.

      • by rev0lt ( 1950662 ) on Wednesday November 23, 2011 @07:59PM (#38154094)
        That's why, in BSD systems, you can mark a file as append-only, and with securelevel >= 1 not even root can remove the flag.
    • It doesn't provide evidence of log tampering, so no - it's not the same thing.

      • It doesn't provide evidence of log tampering, so no - it's not the same thing.

        Since local log files can also be stored and then compared against those captured on the server, yes, it does provide evidence of log tampering.

      • by alcourt ( 198386 )

        PCI not only accepts, but mandates central log storage. The recommendation is to do it using real-time log transmission. Log tampering detection is expected on even the central system, where you use syslog to receive, and then store the central copy in something a bit fancier than a text file, often a database of some kind.

        Local to a system, there is no such thing as acceptable tamper resistance unless it is a write once/read many drive that is physically that way.

        Centralize the logs, and stop messing with my

    • by Shatrat ( 855151 )
      There is also encrypted SNMPv3 which could be used to securely and reliably send short messages in a client/server architecture.
    • Right. Because if someone hacks one of your systems, they couldn't possibly hack your other system too. Of course, your solution also ignores the fact that this is simply not an option for many. We don't all have the cash to spend on a separate machine and secure connection, especially if there is a solution that makes it entirely unnecessary. Also, connections fail. If you can do this, why not use a belt and suspenders approach and have all three?
      • by alcourt ( 198386 ) on Wednesday November 23, 2011 @10:27PM (#38154938)

        In a well designed network, a compromise of a target system does not give one increased ability to compromise the log system, because there is no trust relationship, and the central log host does not even have the same user accounts. No user who has an account on the production system is permitted to have an account on the log system.

        This topic is basic PCI stuff, and common also for SOX compliance. The problems are far from complex.

        The so-called solution does not provide sufficient security based on its description to eliminate the requirement for central log storage, especially since that is an explicit requirement of PCI. Some may have that as an explicit SOX control as well. The obvious problem with the tool is it is only tamper detection, not tamper protection. Any fool with root can erase the evidence that they were the one logged in. It may be a tad harder to hide the fact that the logs were modified, but even that could be bypassed with the above description by simple virtue of rotating the log post-compromise and "losing" the entries in question.

    • Yes and no.
      How do you know if you lost messages? How do you know if some messages are removed?

      Well, you don't. Some apps include a counter for this very reason AFAIK.

      Regardless, in all cases, if someone compromises the logger, he can also make proper hashes/counts.

      But until he does, being sure you get all messages is quite important.

      (not saying that systemd is the right solution, but that's a problem)

  • by pclminion ( 145572 ) on Wednesday November 23, 2011 @04:56PM (#38152236)

    Back in the late 90's when I first started connecting my home Linux systems to the Internet 24/7, I logged everything imaginable. To prevent tampering/falsification of the logs, I simply printed the log on a continuous-sheet dot matrix printer. Good luck tampering with the printout in my office.

    After a while I got to be able to recognize certain types of activity, such as a web user browsing to /index.html, based on the sounds the printer made.

    • by Anonymous Coward on Wednesday November 23, 2011 @05:00PM (#38152308)

      Did you ever get that OCD treated, or are you still suffering?

    • by dickens ( 31040 ) on Wednesday November 23, 2011 @05:01PM (#38152318) Homepage

      Yeah, done that... paper jams were a bitch, though.

      I remember even going to the trouble of cutting one of the leads in the RS-232 cable to make the logging printer a true write-only device.

    • Exactly. Paper doesn't scale, and obviously is difficult to machine scan. But the point is all you need to do is send log files to an indelible medium. Paper is just the simplest one to understand. The electronic equivalent would be something like a WORM drive, or optical non-RW drive. I'm sure there's other examples that exist.

      • by jedidiah ( 1196 )

        You don't even have to make it a WORM drive, you just need it to look like one. Build hardware that looks like a printer but logs to whatever you like. Make the hardware limited and don't have any other interfaces connect to the storage medium.

  • How? (Score:5, Insightful)

    by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Wednesday November 23, 2011 @05:04PM (#38152364) Homepage

    Log entries are "cryptographically hashed along with the hash of the previous entry in the file" resulting in a verifiable chain of entries.

    So this means that in order for someone malicious to modify a log entry, all they really need to do is then re-hash all subsequent entries?
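
    For illustration, the rebuild being described takes only a few lines in Python (a toy sketch of the same chained-hash idea as in the summary, not journald's actual format):

        from hashlib import sha256

        def rebuild(messages):
            # Recompute a self-consistent chain over whatever messages remain,
            # which is exactly what an attacker with write access can do.
            chain, prev = [], "0" * 64
            for msg in messages:
                prev = sha256((prev + msg).encode()).hexdigest()
                chain.append((prev, msg))
            return chain

        forged = rebuild(["entry 1", "entry 2 (edited)", "entry 3"])
        # Verifies fine locally; only a head hash saved elsewhere exposes it.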

  • Where will the journal be located?
    Will tail on it give me any useful information (or I'll have to read thousands of lines until finding the log of the application I want)?
    How will it keep indices without unneeded overhead? (Let's get real, log files are rarely read. Why optimize for reading?)
    When they change the format of the journal, will I have to update all my log parsers?

  • "cryptographically hashed along with the hash of the previous entry in the file"

    Have fun rotating your logs!

  • I'm all for making real improvements, and I'm sure that logging could be improved in various ways. However, when I'm looking at logs, it's generally because something is broken and I want to find information on how to fix it quickly and easily. Storing something in straight text makes it extremely accessible. It's not just about using grep, which many people are accustomed to, but also because text viewers are simple. If your computer can't run programs like cat, tail, or nano, then you've got big probl

  • In cases where avoiding tampering is crucial, just log to a write-once filesystem, or, indeed, a printer.
  • by Nos. ( 179609 ) <andrewNO@SPAMthekerrs.ca> on Wednesday November 23, 2011 @05:24PM (#38152586) Homepage

    There is no real problem this solves. You are far better off logging remotely. This does not stop an attacker from hiding his tracks; you'll just know the logs were altered, but you won't know what was removed, or likely if/when you can start trusting them again. Log remotely, use encryption, and use TCP. Your central/remote logger is your trusted source for logs. You close everything except incoming logs. Parse and alert on the logs from there. It's simple to do, it's real-time, and it solves a lot more issues than this type of solution ever will.
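
    For reference, shipping records to a central host over TCP takes very little code. A minimal sketch using only Python's standard library follows; the host name and port are placeholders, and encryption would still have to be layered on separately (for example rsyslog's TLS support or a tunnel):

        import logging
        import logging.handlers
        import socket

        logger = logging.getLogger("myapp")
        handler = logging.handlers.SysLogHandler(
            address=("loghost.example.com", 514),  # placeholder central logger
            socktype=socket.SOCK_STREAM,           # TCP instead of the default UDP
        )
        logger.addHandler(handler)
        logger.warning("failed login for user harald")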

  • by digitalderbs ( 718388 ) on Wednesday November 23, 2011 @05:26PM (#38152602)
    Signing log messages does not need to be complicated or incompatible with current text-based logging. Hashing messages is incredibly easy to do, and there's really no reason not to do it. I just implemented this in python in less than two minutes.

    >>> from hashlib import md5
    >>> # prefix each line with the MD5 of the previous line (hash bytes, not str, so this also runs on Python 3)
    >>> log = lambda last_message, message: "{}: {}".format(md5(last_message.encode()).hexdigest(), message)

    Each output line is prefixed with the hash of the previous message:

    8a023b9cbebe055e4b080585ccba3246: [ 19.609619] userif-2: sent link up event.
    649a2719064f7f276462464527b48a69: [ 29.680009] eth0: no IPv6 routers present

    No binaries, still grepable, single host, and most importantly, there is now a trail that can be verified.

    • It occurred to me shortly after posting that a simple hash could easily be forged, and that a key signing of sorts would be needed to make it secure, though the system would have to be able to sign its own log messages without giving the hacker access to the signing key.
    • Digital signing is more than hashing. You need to encrypt the hash with a private key.
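
      One middle ground that keeps the log greppable is a keyed MAC instead of a bare hash. A sketch with Python's stdlib hmac module follows; the key shown is an assumption and would have to live somewhere the intruder cannot read it, and a true digital signature, as the parent says, would use an asymmetric private key via an external crypto library instead:

        import hmac, hashlib

        KEY = b"kept-off-this-machine"   # assumption: not readable by the intruder

        def tag(prev_tag, message):
            # Chain the keyed MACs the same way the plain hashes were chained.
            return hmac.new(KEY, (prev_tag + message).encode(), hashlib.sha256).hexdigest()

        t1 = tag("", "[ 19.609619] userif-2: sent link up event.")
        t2 = tag(t1, "[ 29.680009] eth0: no IPv6 routers present")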

  • by anarcat ( 306985 ) on Wednesday November 23, 2011 @05:39PM (#38152742) Homepage

    Now, without getting into how much i dislike Pulseaudio (maybe because i'm an old UNIX fart, thank you very much), I think there are really serious issues with "The Journal", which I can summarize as such:

    1. the problem it's trying to fix is already fixed
    2. the problem isn't fixed by the solution
    2. it makes everything more opaque
    3. it makes the problem worse

    The first issue is that it is trying to fix a problem that is already easily solved with existing tools: just send your darn logs to an external machine already. Syslog has supported networked logging forever.

    Second, if you log on a machine and that machine gets compromised, I don't see how having checksums and a chained log will keep anyone from just trashing the whole 'journal' by running

    rm -rf /var/log

    What am i missing here?

    Third, this implements yet another obscure and opaque system that keeps the users away from how their system works, making everything available only through a special tool (the journal), which depends on another special tool (systemd), both of which are already controversial. I like grepping my logs. I understand http://logcheck.org [slashdot.org] and similar tools are not working very well, but that's because there isn't a common format for logging, which makes parsing hard and application dependent. From what I understand, this is not something The Journal is trying to address either. To take an example from their document:

    MESSAGE=User harald logged in
    MESSAGE_ID=422bc3d271414bc8bc9570f222f24a9
    _EXE=/lib/systemd/systemd-logind
    [... 14 lines of more stuff snipped]

    (Never mind for a second the fact that, to carry the same amount of information, syslog only needs one line (not 14), which makes things actually readable by humans.)

    The actual important bit here is "User harald logged in". But the thing we want to know is: is that a good thing or a bad thing? If it was "User harald login failed", would it be flagged as such? It's not in the current objectives, it seems, to improve the system in that direction. I would rather see a common agreement on syntax and keywords to use, and respect for the syslog levels [debian.net] (e.g. EMERG, ALERT, ..., INFO, DEBUG), than reinventing the wheel like this.

    Fourth, what happens when our happy cracker destroys those tools? This is a big problem for what they are actually trying to solve, especially since they do not intend to make the format standard, according to the design document [google.com] (published on you-know-who, unfortunately). So you could end up in a situation where you can't parse those logs because the machine that generated them is gone, and you would need to track down exactly which version of the software generated it. Good luck with that.

    I'll pass. Again.

    • by skids ( 119237 )

      Now, without getting into how much i dislike Pulseaudio

      Hey, I would have gladly listened to that. Even JACK is picking up bad habits from that pile of crap.

      I would rather see a common agreement on syntax and keywords to use, and respect for the syslog levels

      Hear, hear. But then, it's much harder to be a leader in getting people to do things they should have been doing long ago than it is to lead by saying: "here's my bright new shiny object."

    • by jbov ( 2202938 ) on Wednesday November 23, 2011 @06:29PM (#38153320)
      I can mostly agree with you. There is one thing you might be missing.

      Second, if you log on a machine and that machine gets compromised, I don't see how having checksums and a chained log will keep anyone from just running trashing the whole 'journal'.
      rm -rf /var/log
      What am i missing here?

      Fourth, what happens when our happy cracker destroys those tools?

      I think what you are missing is this replacement is intended to prevent "undetected" tampering with the logs. Currently, a cracker can delete the log entries that would identify his or her activities on the machine, thereby going unnoticed. Deleting the log files or destroying the tools, as you suggested, would certainly be a detectable sign that the machine was compromised.

      • I can mostly agree with you. There is one thing you might be missing.

        [...]

        I think what you are missing is this replacement is intended to prevent "undetected" tampering with the logs. Currently, a cracker can delete the log entries that would identify his or her activities on the machine, thereby going unnoticed. Deleting the log files or destroying the tools, as you suggested, would certainly be a detectable sign that the machine was compromised.

        My point is: even with git, if someone has access to the repository it *can* be tampered with. It's harder and may take longer than with a plain text file, but it's completely possible. With git, there's even an easy way to do it (git rebase) and I suspect that cracking toolkits will adapt and also make that easier. Note that I assume here that you save the first hash of the tree to a secure location, as documented:

        Inspired by git, in the journal all entries are cryptographically hashed along with the hash of the previous entry in the file. This results in a chain of entries, where each entry authenticates all previous ones. If the top-most hash is regularly saved to a secure write-only location, the full chain is authenticated by it. Manipulations by the attacker can hence easily be detected.

        If only the topmost hash is saved to a backup location, then I just need to reroll all the logs
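
        To make the two cases concrete, here is a toy Python sketch of checking a chain against a head hash that was shipped off-box earlier (illustration only; detection holds exactly as long as the attacker cannot overwrite that saved copy):

            from hashlib import sha256

            def head_of(messages):
                prev = "0" * 64                       # same assumed genesis value as above
                for msg in messages:
                    prev = sha256((prev + msg).encode()).hexdigest()
                return prev

            saved_head = head_of(["entry 1", "entry 2", "entry 3"])  # stored off-box
            tampered = ["entry 1", "entry 2 (edited)", "entry 3"]
            print(head_of(tampered) == saved_head)    # False: the rewrite is detected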

    • First issue: This is great if you have an external system to log to - if not, you're boned. This new logging system seems to cover both cases.

      Second issue: One of the big reasons for doing this is to be able to detect when the log has been altered to cover a cracker's tracks. Obviously, a deleted log file is easily detected and a big indicator that your system has been compromised, so I'm not seeing your point here.

      Third issue: As has been stated above, you can log to both the Journal and good old text based

      • by rnturn ( 11092 ) on Wednesday November 23, 2011 @07:56PM (#38154060)

        ``This is great if you have an external system to log to - if not, you're boned.''

        Seriously, how hard is it to set one of these up? Not very. How expensive is it to do this? Not very. Are we going to toss out the current method of logging because of the folks who only have Linux running on a laptop and have that as their only computer?

        You certainly would not need a tremendously powerful PC to sit out on your network and do nothing but accept syslog messages from other systems.

        ``you can log to both the Journal and good old text based log files. That way you can still use your existing tools on the text file while still being notified of log file alteration''

        My understanding (someone correct me if I'm wrong on this) is that there will be only a single logging system, not one doing this Journal format and another for text logs. The text available from the Journal would have to come from a tool that uses certain new library calls to extract information from the Journal. Users would have to pipe the output of that, one supposes, into tools to search for error messages of interest. It's not terribly hard to use but...

        ``backward compatibility will have to be taken into account by the devs''

        Not necessarily. Several of the summaries I've read about this new logging system indicate that the format hasn't been agreed on and may change from time to time. And... there is no guarantee when they'll get around to documenting the format. Good grief! First we have to change all of our log file search scripts to use the new Journal dumping tool. Then the format changes so we have to modify our scripts again. And again, perhaps, whenever it suits Lennart. How nice!

      • by anarcat ( 306985 ) on Wednesday November 23, 2011 @08:44PM (#38154418) Homepage

        First issue: This is great if you have an external system to log to - if not, you're boned. This new logging system seems to cover both cases.

        No, it doesn't: it does not protect you if you do not log to another server or at least back up the hashes somewhere else. You still need a secondary server.

        Second issue: One of the big reasons for doing this is to be able to detect when the log has been altered to cover a cracker's tracks. Obviously, a deleted log file is easily detected and a big indicator that your system has been compromised, so I'm not seeing your point here.

        Well, I was making a rather broad stroke on that one. As I explained earlier, just like with git rebase you can certainly tamper with the logs without being detected, if you are root, so this doesn't cover that case unless (again) you use a secondary server.

        Third issue: As has been stated above, you can log to both the Journal and good old text based log files. That way you can still use your existing tools on the text file while still being notified of log file alteration. I agree that a common format for log entries would be nice but may not be possible since not every application logs the same kind of data. Note also that this proposal allows for arbitrary key/value pairs so some standard conventions will probably come about after its been used for a while.

        Somebody else answered this, but yeah: if you're going to write to logfiles anyway, why bother with the journal?

        Fourth issue: Not sure I understand what you are talking about here... Obviously, backward compatibility will have to be taken into account by the devs. You should be able to read the files on other machines if you backed up your encryption keys, etc. (you do backup that stuff right?). By reading the articles, it sounds like the devs have thought about these issues and/or they have already been raised by others. They seem to be fairly easy to deal with.

        Backward compatibility doesn't seem to have been taken into account by the devs. It's in the FAQ:

        Will the journal file format be standardized? Where can I find an explanation of the on-disk data structures?
        At this point we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly. The access is granted by a shared library and a command line tool. (But then again, it’s Free Software, so you can always read the source code!)

        I'm not necessarily on board with this proposed system either, but your issues seem like they've already been covered by the proposed design.

        I disagree with this analysis. :)

    • by Alsee ( 515537 )

      1.
      2.
      2.
      3.

      The entire argument is moot when your log file is corrupt.

      -

    • I'll just add that there are 2 common logging formats proposed to fix the formatting issue.
      One is CEE and the other is CEF. You can Google 'em up :)

      Systemd-journal's formatting is certainly horrific.

    • Remember the last attempt to have some "binary hard to read" format designed to "be more secure"?
      Hello wtmp horror :-)

      Thank god Zap3 (which is a "hack" tool) can wipe or edit entries at will, better than the non-existent tools to manage it ;-)

  • Absurd (Score:5, Insightful)

    by Anonymous Coward on Wednesday November 23, 2011 @05:49PM (#38152858)

    From the FAQ:

    we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly.

    Not only does it generate logfiles that are not human-readable, they're also in a format that in two years not even their own tool will be able to read. If it is still around in two years, which I doubt.

  • "journal all entries are cryptographically hashed along with the hash of the previous entry in the file. This results in a chain of entries, where each entry authenticates all previous ones. If the top-most hash is regularly saved to a secure write-only location, the full chain is authenticated by it." (emphasis mine)

    Nice security, erm, feature..?

    • I'm thinking that maybe this isn't a typo, and it's intended to just avoid reading. But how would you prevent it from being overwritten, thus producing a new, rebuilt, forged but apparently cryptographically correct hashed logfile?
  • GNOME 3 crack (Score:5, Insightful)

    by David Gerard ( 12369 ) <slashdot.davidgerard@co@uk> on Wednesday November 23, 2011 @07:06PM (#38153674) Homepage

    This is on the same crack as the rest of GNOME 3. They've invented the Windows event log, well done! Now I hand you a trashed system, but you can read the disk. You look into /var/log/syslog ... no, you don't. "We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly. The access is granted by a shared library and a command line tool."

    Speaking as a sysadmin, I shudder at this incredibly stupid idea. Are they even thinking of how to get something actually readable in disaster?

  • Seriously? (Score:5, Insightful)

    by cdukes ( 709042 ) on Wednesday November 23, 2011 @07:32PM (#38153872) Homepage

    Is this a joke? Or is it someone just trying to push their ideology of what they think should be done to the rest of the world to make their idea a standard?

    Doing something like this would be a sure way for Linux to shoot itself in the foot. For evidence, one only needs to look as far as Microsoft, who insists on doing it their special way and expecting everyone else to do what they deem as "good". The concept of syslog messages is that they are meant to be 'open' so disparate systems can read the data. How do you propose to integrate with large syslog reporting/analysis tools like LogZilla (http://www.logzilla.pro)?

    The authors are correct that a format needs to be written so that parsing is easier. But how is their solution any "easier"? Instead, there is a much more effective solution available known as CEE (http://cee.mitre.org/) that proposes to include fields in the text.

    > Syslog data is not authenticated.
    If you need that, then use TLS/certificates when logging to a centralized host.

    >Syslog is only one of many logging systems on a Linux machine.
    Surely you're aware of syslog-ng and rsyslog.

    > Access control to the syslogs is non-existent.
    To locally stored logs? Maybe (if you don't chown them to root?)
    But, if you are using syslog-ng or rsyslog and sending to a centralized host, then what is "local" to the system becomes irrelevant.

    > Disk usage limits are only applied at fixed intervals, leaving systems vulnerable to DDoS attacks.
    Again, a moot point if admins are doing it correctly by centralizing with tools like syslog-ng, rsyslog and LogZilla.

    >"For example, the recent, much discussed kernel.org intrusion involved log file manipulation which was only detected by chance."
    Oh, you mean they weren't managing their syslog properly so they got screwed and blamed their lack of management on the protocol itself. Ok, yeah, that makes sense.

    They also noted in their paper that "In a later version we plan to extend the journal minimally to support live remote logging, in both PUSH and PULL modes always using a local journal as buffer for a store-and-forward logic"
    I can't understand how this would be an afterthought. They are clearly thinking "locally" rather than globally. Plus, if it is to eventually be able to send, what format will it use? Text? Ok, now they are back to their original complaint.

    All of this really just makes me cringe. If RH/Fedora do this, there is no way for people that manage large system infrastructures to include those systems in their management. I am responsible for managing over 8,000 Cisco devices on top of several hundred linux systems. Am I supposed to log on to each linux server to get log information?

  • It seems pointless. If somebody already has enough privileges on your server to mess with the logs, how is a hash going to help? There's a whole bunch of things an attacker can do that makes this useless.

    Most obviously, they can corrupt or erase the contents of the file. Noticeable, but the traces of their access can be deleted, so that the admin can't figure out who did it.

    The attacker can save an old file, do whatever needs hiding, and replace the file with the old copy. Depending on how it works this ma

  • by tlambert ( 566799 ) on Wednesday November 23, 2011 @07:52PM (#38154038)

    If only this had been done before... Oh, it has. It's called "asl".

    http://opensource.apple.com/source/syslog/syslog-132/ [apple.com]

    -- Terry

  • I'd like to see an OS where all logging is recorded in a database. With encryption and access control, and replication to remote instances.

    • by alcourt ( 198386 )

      Replication is in syslog if you want.

      If you really want my wish list for syslog, it is to reexamine the facilities. We don't need uucp or news any longer, but it would be nice to offer support for extended facilities.

      Beyond that, everything I've ever found myself wanting is already in syslog-ng but one. I want a program, distributed with any log daemon, that will parse the configuration file and if given a message string, program or facility.priority pair, will tell me all destinations it would go to. So

  • Dubious project... (Score:4, Insightful)

    by Junta ( 36770 ) on Thursday November 24, 2011 @10:47AM (#38157908)

    If we were to accept a binary format, then at least it shouldn't be from a group that says up front:

    At this point we have no intention to standardize the format and we take the liberty to alter it as we see fit. We might document the on-disk format eventually, but at this point we don’t want any other software to read, write or manipulate our journal files directly. ... we don’t want any other software to read, write or manipulate our journal files directly

    This is absolutely unacceptable for projects in *nix land intending to serve such a central role as logging.

    Reading the actual original document, I don't think it focuses so much on security. But to the extent it does, it's pretty pointless. They make noise about an authenticated chain of entries so you can't just modify the middle, *but* that provides no benefit as the attacker can then just rebuild the chain from that point forward. Their answer is to send it to some place that cannot be modified once transmitted. This is exactly the same as remote syslog policies, no additional security, but added complexity for no gain.

    Additionally, they *could* have a system with plaintext and a binary format in place, and I recommend they change their minds to do so. The binary blob can contain offsets into a corresponding text file. Thus the good old unix way (which the systemd people seem intent on destroying) is preserved while they get their enhancements at the same time.

    They *do* have some valid points. Syslog can't cope with binary data, it doesn't provide a good per-user logging facility, large text files are hard to search, and syslog has insufficient service/event type facilities, making complex analysis a requirement in some scenarios. Even in a simplistic case, I have been left at a loss for 'what string *should* I grep for?' Many services ignore syslog because of its limitations, as pointed out in the article, making things that much more complicated.

    But at the exact same time they bemoan so many services doing different logging, they propose making yet another facility and recommend keeping rsyslog running because they aren't going to handle syslog messages. They tell people 'tough you have to use systemd' and 'tough you must use our logging'.

    They dismiss java-style namespace management due to variable width, which I think is just going *too* far to achieve theoretical performance gains. They get *very* defensive about UUIDs, and I accept that when managed correctly they are unique, *but* it adds a layer of obfuscation unless you have a central coordinating master map of UUID to actual usable names. Uniqueness is an insufficient criterion. Have both worlds. An application submits a message with both a human-readable namespace *and* a UUID. If your logging facility already has the UUID, ignore the namespace. If your hash table does not have that UUID, store a mapping between the UUID and namespace. Then your tool has the added bonus of having a way to dump a quick list of currently observed message types to search by.
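
    That last suggestion is easy to sketch in Python (illustration only; the UUID is borrowed from the journal example earlier in the thread and the namespace string is invented):

        observed = {}   # UUID -> human-readable namespace, built up as messages arrive

        def record(msg_uuid, namespace, message):
            # Remember the mapping the first time a UUID is seen; afterwards the
            # namespace carried in the message can be ignored.
            observed.setdefault(msg_uuid, namespace)
            return (msg_uuid, message)

        record("422bc3d271414bc8bc9570f222f24a9", "org.freedesktop.logind.login", "User harald logged in")
        print(observed)   # quick dump of currently observed message types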
