rongten (756490) writes "I manage a computer lab composed of various kinds of Linux workstations, from small desktops to powerful machines with plenty of RAM and cores. The users' $HOME directories are NFS-mounted, and they log in via the console (no user switching allowed), SSH, or x2go. In the past the powerful workstations were reserved for certain power users, but now even 'regular' students may need access to high-memory machines for some tasks. I ask Slashdot: is there a resource-management system that would permit the following?

- Forbid the same user from logging in graphically more than once (like UserLock).
- Limit the number of SSH sessions per user (i.e. no user spamming the rest of the machines with distcc, or worse, running jobs on all of them in parallel).
- Give priority to the console user (i.e. automatically renice remote users' jobs and restrict their memory usage).
- Avoid swapping and waiting (i.e. everyone tries to log into the latest and greatest machine, so cap the number of logins in proportion to the machine's capacity).

The system being put in place runs Fedora 20 with LDAP PAM authentication, is managed by Puppet, and is NFS-based. In the past I tried to achieve similar functionality with cron jobs, login scripts, SSH and NX management, and a queuing system, but it was not an elegant solution and involved a lot of hacks. Since these requirements should be pretty standard for a computer lab, I am surprised I cannot find something already written for it. Does anyone know of such a system, preferably open source? A commercial solution could be acceptable as well."
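Some of the per-user caps above can be expressed with stock PAM facilities rather than cron hacks. A minimal sketch, assuming pam_limits.so is enabled in the PAM stack (it is by default on Fedora) and that lab users belong to a group named "students" (both assumptions, not from the submission):

```
# /etc/security/limits.conf -- sketch only; tune values per machine via Puppet
@students   -      maxlogins   2         # at most 2 concurrent logins per user
@students   hard   nproc       256       # cap process count (tames distcc fan-out)
@students   hard   as          8388608   # address-space limit in KB (~8 GB)
```

Note that maxlogins counts all login sessions, so it cannot by itself distinguish a graphical login from an SSH one. For coarse per-user memory caps, systemd user slices may also help on Fedora 20 (systemd 208), e.g. `systemctl set-property user-1000.slice MemoryLimit=4G`, though slice resource controls were still maturing in that release; automatically renicing remote sessions relative to the console would still need a small logind or pam_exec hook, which is presumably why nothing turnkey exists.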