
Comments


Ask Slashdot: Best Software For Image Organization?

rongten Re:Software doesn't really matter (259 comments)

Hi there,

  For archiving purposes, it is best never to touch the original files. That helps when you have thousands of files and, over the years, have made backups in different places and on different disks.

When you consolidate (and either you consolidate or you lose your photos/memories), photos that differ only in their EXIF tags are a nightmare: it is hard to tell which copies are OK and which are not.

Always prefer programs that do not touch your photos. I recently found that one of the programs I used in the past with an old camera (2002-2005) was nuking the EXIF data when rotating images. I still need to find out which one it was... and damn it to hell.

Now it would be great to write .xmp sidecars for JPEGs, but the last time I tried (a few months ago) I did not manage to make it work with Shotwell (there is only an option to alter the file metadata... the horror).

In my case, to consolidate the photo collection, I keep the originals in different folders (thematic, chronological, etc.) and then create symlinks in a directory called "history". Here is a work in progress:


#!/volume1/homes/admin/local_programs/bin/bash
# Build a date-based tree of symlinks under ./history without touching the originals.
#set -x
EXT="jpg JPG jpeg JPEG"
#DEBUG="echo"
num=0
for exte in $EXT
do
    for file in $(find . -name "*.$exte" | grep -v history); do
        echo "doing $file"
        # Keep the previous file's date/hour as a fallback estimate.
        OCDATE=$CDATE
        OCHOUR=$(echo "$CHOUR" | awk -F'.estim' '{print $1}')
        INFO=$(exiftool "$file" | tr '\n' '#')
        # Files without a Make tag are probably not camera originals: skip them.
        MAKE=$(echo "$INFO" | tr '#' '\n' | grep "^Make")
        [ -z "$MAKE" ] && echo "Problem with $file. Skipping" && continue
        CDATE=$(echo "$INFO" | tr '#' '\n' | grep "Media Create Date" | awk '{print $5}')
        [ -z "$CDATE" ] && CDATE=$(echo "$INFO" | tr '#' '\n' | grep "Create Date" | awk '{print $4}')
        # In the "Date/Time Original" line the date is field 4, the time field 5.
        [ -z "$CDATE" ] && CDATE=$(echo "$INFO" | tr '#' '\n' | grep "Date/Time Original" | awk '{print $4}')
        [ -z "$CDATE" ] && CDATE=$OCDATE
        [ -z "$CDATE" ] && echo "error inquiring file $file" && continue
        CHOUR=$(echo "$INFO" | tr '#' '\n' | grep "Media Create Date" | awk '{print $6}')
        [ -z "$CHOUR" ] && CHOUR=$(echo "$INFO" | tr '#' '\n' | grep "Create Date" | awk '{print $5}')
        [ -z "$CHOUR" ] && CHOUR=$(echo "$INFO" | tr '#' '\n' | grep "Date/Time Original" | awk '{print $5}')
        # No timestamp found: estimate from the previous file and mark it as such.
        [ -z "$CHOUR" ] && num=$((num + 1)) && CHOUR=${OCHOUR}.estimation_$num
        [ -z "$CHOUR" ] && echo "error inquiring file $file" && continue
        TYPE=$( echo "$INFO" | tr '#' '\n' | grep "File Type" | awk '{print $4}')
        YEAR=$( echo "$CDATE" | cut -d':' -f1)
        MONTH=$(echo "$CDATE" | cut -d':' -f2)
        DAY=$(  echo "$CDATE" | cut -d':' -f3)
        FNAME=$(echo "$CHOUR" | tr ':' '-')
        FNAME=${FNAME}.$TYPE
        DDIR=history/$YEAR/$MONTH
        DEST=${DDIR}/${DAY}-${FNAME}
        [ ! -d "$DDIR" ] && $DEBUG mkdir -p "$DDIR"
        if [ ! -L "$DEST" ]; then
            $DEBUG ln -s "./../../../$file" "$DEST"
        else
            # Same destination name pointing elsewhere means a real collision: stop.
            TGT=$(readlink "$DEST")
            [ "$TGT" != "./../../../$file" ] && echo "Error with $file and $DEST" && exit 1
        fi
    done
done

In this way, if your data is on a NAS, you can export it to Kodi or other clients and you do not need to re-tag everything all over again.

I guess you could do the same with tags (people, events) in Shotwell, then export the associations and build similar symlinks.

This is not very elegant, but it lets you find problems and it is very portable: if the Shotwell database becomes corrupted, you do not lose anything...
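The per-tag idea above can be sketched the same way as the history tree. Assuming the people/event associations have been exported to a plain "tag,relative/path" CSV (the file name, its format and the sample data below are hypothetical; Shotwell itself does not produce this file), building one symlink directory per tag is just another loop:

```shell
# Sketch only: tags.csv and its "tag,path" format are assumed, not a
# Shotwell export format. The sample photo below is fake demo data.
set -e
cd "$(mktemp -d)"

# Fake export plus one matching original photo, for demonstration.
mkdir -p photos/2014/07
touch photos/2014/07/12-10-30-00.JPG
cat > tags.csv <<'EOF'
Holidays,photos/2014/07/12-10-30-00.JPG
EOF

# One directory per tag, each containing symlinks back to the originals.
while IFS=',' read -r tag photo; do
    mkdir -p "bytag/$tag"
    ln -sf "../../$photo" "bytag/$tag/$(basename "$photo")"
done < tags.csv

ls -l bytag/Holidays
```

As with the history tree, the originals are never touched, so a corrupted database costs you nothing but the links.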

about a week ago

Debian Talks About Systemd Once Again

rongten Systemd seems fine to me at this stage (522 comments)

Hello,

  I have deployed some Fedora 20 machines in the last 3-4 months, and so far I have not seen anything that led me to cry foul against systemd.

  Actually, the handling of user sessions for housekeeping purposes seems much simpler now.

  So I don't get all this hate. Maybe I have not looked deep enough; time will tell.

  Cheers

about 2 months ago

When Customer Dissatisfaction Is a Tech Business Model

rongten Re:Fleeing abusive companies? (257 comments)

Here in Belgium a registered letter is generally sufficient to cancel a service (e.g. cable).
Last time I changed internet provider I waited for the contract to expire, but I think the laws are more consumer-friendly now and you can switch much more easily.

The general idea is to foster competition between companies by making it easier for a customer to jump ship and vote with his wallet/her purse.

Of course, other governmental intervention (forcing the old telecom monopoly to lease its infrastructure at a reasonable price, and now trying to do the same for cable) is a godsend.

You can always argue that the incumbent has the advantage (you may want to avoid the ping-pong between the virtual operator and the incumbent), but it sure as hell looks infinitely better than what people have been suffering through in the USA.

I have friends who went to work there and were flabbergasted by the internet connections and prices...

about 4 months ago

Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

rongten Re:Good grief (98 comments)

Exactly the last point.

  What I dislike most are users who take advantage of others' lack of knowledge. This happens either intentionally or unintentionally when rules are not enforced.

I would like all the students (often coming into contact with Linux, shell programming and clusters for the first time) to have a fair shot at using the available resources, and not to backstab each other.

  Before, everyone could run on the cluster, until I discovered that certain students were giving their logins to others: the first did not really need the cluster (i.e. theoretical work) and the second would run twice as many jobs as the others.

about 4 months ago

Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

rongten Re:Platform LSF (98 comments)

Hi,

  another alternative might be SysFera-DS, but their open source offering seems to lack documentation and features (see here).

  I need to investigate. It seems to be along the lines of what VizStack could have done.

about 4 months ago

Ask Slashdot: Linux Login and Resource Management In a Computer Lab?

rongten Re:Just deal with problem users individually. (98 comments)

Hi,

  the Beowulf clusters we have run either CentOS or SLES. For the development workstations, where newer versions of certain software are needed, I install Fedora.

  This means the developers basically run production on the cluster and develop on the workstations.

  Since there is always a gap between the two (i.e. CentOS 5 on the cluster and Fedora 16 on the workstations before; CentOS 6 on the cluster and Fedora 20 on the workstations now), there is limited breakage when the cluster is updated, at least so far.

  I understand those who push a stable distro everywhere; maybe next cycle I will do the same, who knows.

about 4 months ago

China Going Up and Coming Down

rongten Re:Safety? (400 comments)

You start dancing with the sharks and *AA agents.

more than 9 years ago

Submissions


Ask Slashdot: Linux Login and Resource Management/Restriction in a Computer Lab

rongten rongten writes  |  about 5 months ago

rongten (756490) writes "I am managing a computer lab composed of various kinds of Linux workstations, from small desktops to powerful workstations with plenty of RAM and cores. The users' $HOME is NFS-mounted, and they access the machines via console (no user switching allowed), ssh or x2go. In the past the powerful workstations were reserved for certain power users, but now even "regular" students may need access to high-memory machines for some tasks.
I ask Slashdot: is there a sort of resource management that would permit: forbidding the same user from logging in graphically more than once (like UserLock); limiting the number of ssh sessions (i.e. no user using distcc and spamming the rest of the machines, or even worse running jobs in parallel); giving priority to the console user (i.e. automatically renicing remote users' jobs and restricting their memory usage); and avoiding swapping and waiting (i.e. all the users trying to log into the latest and greatest machine, so a limited number of logins proportional to the capacity of the machine)?
The system being put in place uses Fedora 20 with LDAP PAM authentication; it is Puppet-managed and NFS-based. In the past I tried to achieve similar functionality via cron jobs, login scripts, ssh and NX management, and a queuing system.
But it is not an elegant solution and it is heavily hacked together.
Since I think these requirements should be pretty standard for a computer lab, I am surprised that I cannot find something already written for it.
Does any of you know of a similar system, preferably open source? A commercial solution could be acceptable as well."
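For what it is worth, one building block that covers part of this (concurrent login and process caps) is pam_limits, which is already in the default PAM stack on Fedora. A sketch of /etc/security/limits.conf entries follows; the @students group name and all the numbers are made up and would need tuning per machine:

```
# /etc/security/limits.conf -- example values, tune per machine
@students   hard   maxlogins   2        # max concurrent logins per user
@students   hard   nproc       200      # cap process count (runaway distcc, fork bombs)
@students   hard   as          8000000  # address-space limit in KB, to limit swapping
```

Note that maxlogins counts console, ssh and graphical sessions together, so prioritizing the console user would still need something on top, e.g. cgroup rules via cgred/cgrules.conf on Fedora 20.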

Journals

rongten has no journal entries.
