Syncing Options for Computer Lab Machines?

Cliff posted more than 10 years ago | from the keeping-control-over-the-hardware dept.

Operating Systems 60

sirfunk asks: "I'm going to begin helping out maintaining the computer labs around my university campus, and I was wondering what hints and tips the Slashdot community had for maintaining computer lab networks. We need a solution where we can keep a remote image on a server, and the computers will update to that on bootup. We also need them to be able to update even if Windows is severely messed up (so if Windows dies, just reboot it). I know there are commercial solutions like Deep Freeze, but I was hoping someone knew of a creative open source alternative. I'd love it if we could run these as dumb terminals with *nix, however that won't be an option for the general public. One idea I had was to make the machines boot into a Linux partition that would rsync a FAT filesystem (the update) and then reboot to that FAT filesystem. Getting it to boot into Linux first and then into Windows might be tricky. I would love to hear everyone's ideas on this topic. If you have any ideas that would run cross-platform (Mac/Windows), that would be great, too."



You only need RDP terminals (2, Interesting)

Dancin_Santa (265275) | more than 10 years ago | (#7326988)

For far less than the price of a real desktop, you can get a Windows Thin Client [windowsfordevices.com] that will work and play well with your NT servers.

For a lab, you may even be able to get volume pricing.

Re:You only need RDP terminals (2, Informative)

JVert (578547) | more than 10 years ago | (#7331272)

$350 for a thin client plus $200 for the RDP license (1 CAL and 1 RDP CAL). Plus they still modify files on the computer. Now it just takes one talented induhvidual to screw up the server.

Re:You only need RDP terminals (2, Insightful)

DrZaius (6588) | more than 10 years ago | (#7334692)

Even cheaper -- install Red Hat or some other Linux and have it start rdesktop as the window manager -- you'll get a fresh Windows login every time you hit ctrl-alt-backspace...
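For what it's worth, the whole client side of that can be a single session file. A hypothetical ~/.xsession (the terminal server name is invented), run by the display manager every time X starts:

```
#!/bin/sh
# rdesktop full-screen is the entire X session; when the RDP session
# ends (or X restarts), the display manager starts a fresh one.
exec rdesktop -f termserver.example.edu
```

Since nothing else runs in the session, there's nothing for a user to break locally.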

loadwin (0)

Anonymous Coward | more than 10 years ago | (#7327027)

Sounds like you would need something like loadlin, but to start Windows from Linux: on every boot, load Linux, rsync, then start Windows.

Re-Imaging (3, Informative)

NeonSpirit (530024) | more than 10 years ago | (#7327038)

If I understand the situation correctly, you want to re-image each machine on boot. I have looked at this, and a complete XP Pro image on a gigabit network takes anything from 20-45 minutes. This is using a product called Altiris Deployment Server [altiris.com], which uses PXE under the covers. If this is acceptable then I'm sure you could do your own PXE solution with a Linux DHCP and TFTP server. You can download a free 30-day eval to see how it works and "clone" the procedure.

No way in hell. (-1)

Jonathan Hamilton (221) | more than 10 years ago | (#7342539)

There is no way in hell it takes 20-45 minutes for just XP on a gigabit network.

Where I work, at the School of Communications @ FSU (comm.fsu.edu), we use Norton Ghost over a 100Mbit network. Win2k, SPSS Pro, Adobe, Microsoft Office, and just about every other academic program you can think of is on the partition, and it might take 15 minutes.

We start ghosting machine(s) and go out and have a smoke, and by the time we get back they're done.
Even on the slower 10Mbit network we have, it only takes about 30 minutes.

If you're on a gigabit network, you can transfer at least 1 gig a minute.

Re:No way in hell. (1)

NeonSpirit (530024) | more than 10 years ago | (#7346613)

The machines have XP Pro and Office XP installed, plus a few other minor apps. I may not have made myself clear: the entire imaging process takes this time. With Altiris there are a fair number of reboots in the imaging cycle, each with the associated DHCP and PXE downloads. Also, Altiris boots to DOS with a generic network device driver, which is probably not the most efficient and may not negotiate correctly to gigabit full-duplex.

Re:Re-Imaging (0)

Anonymous Coward | more than 10 years ago | (#7379662)

Surely you'd reimage from a second partition on disk and only update that image from time to time.

some goooooogling (1)

antoinjapan (450229) | more than 10 years ago | (#7327055)

I'm in Japan and it seems like America is still asleep, and I'm in a good mood, so I'll search for you.....
I thought I read once that Ghost creates its own partition and then boots to that and downloads the image. So booting a minimal install of Linux mightn't be much different... so... Ghost for Unix [feyrer.de]
Something called System Imager [systemimager.org]
A thread about ghost alternatives for linux [cantech.net.au]
cluster cloner [sourceforge.net]
tired of a href'n:
http://www.jpartner.com/documentation/platform/linux/ghost2.htm
xpt.sourceforge.net/techdocs/linuxdisk/Tools/linuxdisk09.000.html
dunno if any of those help or not

Altiris (4, Informative)

MImeKillEr (445828) | more than 10 years ago | (#7327138)

When I worked in support (last gig was supporting internal classrooms) we used Altiris LabExpert. They've changed the name to Application Management [altiris.com], but this may be what you want. It's not open-source, but comparing this program's prices to the other similar ones on the market, we saved a TON of money (one vendor wanted nearly $150K for all the computers we were going to use this on. I think we spent $7K at each site for a total of $28K)

It has server and client modules. The clients sync with the server on reboot. If there are jobs in the queue, the server pushes them; they're applied and the machine reboots.

To create jobs, you make a baseline of an OS, install the application, and then run the baseline app again. The application examines the entire disk as well as the registry and notes changes. You build a package containing just the changes.

You can even turn the packages into self-extracting .EXEs, burn to CD and deliver that way.

easyeverything (1, Interesting)

Anonymous Coward | more than 10 years ago | (#7327161)

The UK's easyEverything [easyeverything.com] (or easyInternetCafe, or whatever they call themselves) runs large internet cafes (up to 1000 PCs -- I think the one in Times Square in NY is the biggest), and every time a user logs out of a PC it reboots and re-images itself. They use a commercial product [rembo.com] for that, though. And it's Windows 95 (!), so I'm sure they have a pretty tiny image.

Don't Knock It (5, Informative)

yancey (136972) | more than 10 years ago | (#7327194)

It seems like you are pro-open-source, but don't dismiss the commercial products completely. Novell's ZENworks for Desktops (ZfD) product is quite simply amazing! It also happens to do exactly what you're talking about.

Does it require Novell servers? No, it does not. You can read more from the ZENworks documentation [novell.com] at Novell's website. Read the ZENworks 4 docs. ZENworks 6 is a bundle of ZENworks 4 for Desktops and ZENworks for Servers and ZENworks for Handhelds.

I once read about a university (I think in the UK) that managed 30,000 Windows desktops with only six people! Also, the largest companies on the planet tend to favor ZENworks for Desktops over SMS [ntbugtraq.com] for deploying patches.

My computer support group uses ZfD to manage about 1,500 computers whose configurations vary widely, from P2-400s to P4-3.06 GHz boxes running anything from Win98 to WinXP. About 400 machines are in labs, but the rest are faculty or staff desktops. ZfD is extremely flexible. ZfD has an imaging solution, but is not limited to that.

ZfD imaging boots up a Linux agent first, either from the hard disk or by booting it over the network from the ZfD server or from a bootable CD-ROM. This agent checks Novell eDirectory to see what it should do (store an image of this workstation on the server, install an image onto the workstation, or other tasks). Once the image has been transferred, the computer reboots into Windows. Each time the computer boots, ZfD will check to see if it should perform an imaging task; if not, then it just boots Windows. ZfD can also add software to the base image on-the-fly!

Alternately, you can automate an install of Windows (just the base OS, with patches). Then install the ZfD agent and let it install all the other software for you. This solution is the ultimate in flexibility, but requires you to have a pretty intimate knowledge of how Windows and ZENworks function, like what registry entries are dangerous to deploy to other workstations.

A combination of imaging and software deployment is an excellent way to get a workstation installed quickly and have a large selection of software available. You can deploy a small image (Windows, ZfD agent) and allow the ZfD agent to install other software as needed by the users. For example, ZfD can put items on the Start menu and when the user clicks on that item for the first time, ZfD installs the software. Rarely does one need to reboot.

ZENworks is probably the best solution available for managing large numbers of Windows desktops. It is powerful and flexible. Like many powerful tools, it is also a double-edged sword. It can easily deploy a patch and fix thousands of workstations, but if you deploy the wrong registry entry, you can just as easily break thousands of workstations. This is why you have to know Windows inside and out.

Finally, Novell has really good discounts for education. If you don't already have it available to you, check into it.

Unison (2, Informative)

jungd (223367) | more than 10 years ago | (#7327452)

Check out Unison [upenn.edu]. Not sure if it is exactly what you want, but it is a nice cross-platform filesystem sync tool I use.

Re:Unison (1)

buttahead (266220) | more than 10 years ago | (#7335681)

yeah... the problem using this for his solution is that if Windows gets to acting strange, unison (is there a Windows port?) may not work properly. Also, I believe Windows is pretty strict about overwriting system-critical files while they are in use, so unison would fail for a full system sync after booting into Windows.

Also, unison can be a little tedious when trying to merge thousands of files... without an interactive session it is very difficult :(

infrastructures.org (1)

ghostis (165022) | more than 10 years ago | (#7327500)

has documentation on the theory behind keeping multiple systems up to date. Most of their work has been Unix oriented, but the concepts they have developed are broadly applicable.


why image? (2, Interesting)

gizmo_mathboy (43426) | more than 10 years ago | (#7327516)

Actually, how "close" are the images, network-wise? As another has noted, it will take a long time to do the image.

In my labs we just deploy the machines and update the software remotely as needed. Sure, we should redeploy once or twice a year to clear out the cruft that builds up over a semester. But I think it beats re-imaging on every boot.

A good question is how much are you imaging? That could save some time.

Of course, that's just my opinion I could be wrong.

What are these machines doing? (1, Interesting)

aridhol (112307) | more than 10 years ago | (#7327605)

I'd love if we could run these as dumb terminals with *nix, however that won't be an option for the general public.
Why not? What are these machines doing that makes Windows absolutely irreplaceable? Decide what apps will be running on these machines. Since they're university computers, they probably won't be running games. Exchanging Office documents? If everybody in the university uses OpenOffice, that limits the requirement for MS Office to out-of-uni work. A few limited-access machines could be used to fix up MS Office <-> OpenOffice transfers.

How about an image on an 2nd partition? (4, Informative)

pbulteel73 (559845) | more than 10 years ago | (#7327672)

You could always have an image saved on a 2nd hidden partition and recover from there. That would make it a LOT faster than trying to go through the network. The LG Internet fridge recovers its Win98 partition and resets itself by doing this. (No, I don't have one -- they're $8000.)

I don't know what tools they use for this, but dd should work. This is also how some companies used to keep the recovery information for their desktops. If you used your rescue CD, it would recover from that hidden partition.

Anyway, just a thought...
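The bare-bones version of the dd idea might look like this (all device names are made up; dd is a literal sector copy, so the recovery partition must be at least as large as the Windows one):

```shell
# Hypothetical layout: /dev/hda1 = Windows partition,
# /dev/hda3 = hidden recovery partition (same size or larger).

# Save a raw copy once, from the rescue/Linux environment:
dd if=/dev/hda1 of=/dev/hda3 bs=1M

# Restore on demand by copying back the other way:
dd if=/dev/hda3 of=/dev/hda1 bs=1M
```

Crude, but there's nothing Windows can do to interfere, since it runs entirely outside the OS being restored.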


Re:How about an image on an 2nd partition? (1)

MarcQuadra (129430) | more than 10 years ago | (#7335130)

That's what we do at URI. Most of our lab workstations had 10GB local drives, and our image was about 2GB in Ghost high-compression. We stored the image on a second partition, and each morning at 7AM they'd wake up and, if their time was right, they'd reimage.

Pushing updates was hard because they wouldn't let me have any server access; I had to do it manually. If I DID have server access, though, I'd store the image on the server, have a cron job MD5 it, and the workstations would compare MD5 sums with the server at staggered times and copy the new image if they were different.
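That comparison is simple to script. A rough sketch of the client side, with made-up paths and server URL (the real file/network reads are shown as comments so the logic stands alone):

```shell
#!/bin/sh
# Decide whether to reimage by comparing the locally recorded checksum
# with the one published on the server. All paths/URLs are hypothetical.

sums_differ() {
    # $1 = local checksum, $2 = server checksum; true when they differ
    [ "$1" != "$2" ]
}

# In the real cron job these would come from the filesystem and the server:
#   local_sum=$(cat /var/lib/lab-image.md5)
#   remote_sum=$(wget -qO- http://imageserver.example.edu/lab-image.md5)
local_sum="d41d8cd98f00b204e9800998ecf8427e"
remote_sum="0cc175b9c0f1b6a831c399e269772661"

if sums_differ "$local_sum" "$remote_sum"; then
    echo "server image changed -- fetch it and reimage at the next window"
fi
```

Staggering when each workstation runs this (e.g. by offsetting the cron minute per machine) keeps them from all pulling the new image at once.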

rsync doesn't need *nix (3, Informative)

cloudmaster (10662) | more than 10 years ago | (#7327700)

You can install rsync for windows, which is easily done using cygwin. Write a little shell script (since you're pro *nix) and set it up to run on boot. That oughtta be fairly easy.

When I was working computer labs, my preferred solution was linux + vmware, BTW. The machines ran linux (with everything mounted read-only - I'd netboot if I did it again), started up X, and then fired up a VMWare instance that ran full screen. The virtual disk image was on a remote machine (though it could just as easily be pushed to the client machines when it was updated), and was opened read-only on the clients. If anything happened, they'd just "restart", which just threw away any local changes that they'd made. It was great for the net admin classes, as we could give the users full control of the windows machine without worry of them actually screwing anything up. Also, you can update the install at any time by simply opening the disk image with "save changes" enabled. If you set the file system permissions so that normal users can't write to the image even if they do manage to change the vmware settings, you're pretty well set.

Granted, it costs some money, but it works real well - if you don't need direct hardware access to devices not supported by the host OS. That's the VMWare solution's catch - not all hardware is perfectly supported by linux, and using Win32 as a host is rather pointless. :(

Re:rsync doesn't need *nix (1)

buttahead (266220) | more than 10 years ago | (#7335665)

rsync is great, but it looks like he is going for more of a PXE or NetBoot solution, as pointed out by his post:

computers will update to that on bootup and be able to update, even if Windows is severely messed up

rsync is only usable from userland, but I suppose if you had no other solution, you could :

1) install an initrd on the box,
2) boot into that,
3) rsync the image to a different partition on disk, and
4) pivot_root into that partition.

Not Windows, but Linux... (1)

Asprin (545477) | more than 10 years ago | (#7327720)

Have you looked at Partition Image [partimage.org]? The NTFS support is still 'experimental', but it can load images over a network from a server. I don't know if it can boot them or not, but it's open-source, so I'm sure you can get some kind of help from the developers toward adding that sort of capability yourself. Then, you'd just need to make a set of bootable CDs that run the partimage client and automatically rewrite the hard drive with the correct image. Shoot -- if you put 2GB of RAM in them, would it be possible to go diskless and load everything onto a RAM Drive? That way, the PC rewipes itself every reboot and you might even get a kick in performance if the disk accesses don't clog the memory bus too badly.

Now, this probably doesn't help because you are looking for a Windows setup, but if you needed Linux, what about rolling your own customized version of Knoppix [knopper.net]?

IIRC, the latest versions support network booting from hosted images, and several [knoppix-std.org] others [gnoppix.org] have [octeams.com] taken [linuxtag.net] Knoppix and tweaked it with various different hardware support and software changes (Overclockix, for example, adds stuff like support for NVidia's NForce2 chipset using NVidia's Linux drivers, which Knoppix won't include because of the licensing terms.)

...though, on second thought, I suppose if you were willing to go through all that trouble, you might just be willing to host the /usr tree read-only from your server -- that would do about the same thing.

Re:Not Windows, but Linux... (Partimage(d)) (2, Interesting)

lptp (455011) | more than 10 years ago | (#7328948)

Right now, I have a "partimage" solution we use to reinstall our PC rooms (115 PCs right now) in a similar way to what's asked for in the originating post.

Complete picture:
+ PC boots, loads Linux from the network (PXE boot)
+ Linux does an fdisk, starts partimage and restores the original image

Overnight reinstalls are necessary 'cause we want to give students total freedom on the machine.

Two problems:
1) Found no way to boot Windows from a running Linux so far. The temporary solution is a reboot, with either the DHCP server changing its PXE options OR altering the file that decides whether a Linux network boot or a /dev/hda1 boot will happen (we use PXELinux, a syslinux variant, so changing the PXELinux configuration file for a certain MAC does that trick).

2) Partimaged, the server-side program that offers the images on the network, is pretty much crippled by its limitation to 10 (15 now?) simultaneous clients. This makes it impossible to update all PCs in a single room (up to 30) at once, even though the server capacity is up to it. I tried to run multiple instances of partimage on different listening ports, but this crashes partimaged...

Improvement would be a good thing here, so I'll be watching this thread closely.
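The per-MAC PXELinux trick in problem 1 looks roughly like this (MAC address and file names invented): PXELinux checks for a config file named 01- plus the client's MAC address (dash-separated) before falling back to default, so flipping one small file flips one machine:

```
# pxelinux.cfg/default -- normal nights: boot the local Windows disk
DEFAULT localdisk
LABEL localdisk
    LOCALBOOT 0

# pxelinux.cfg/01-00-11-22-33-44-55 -- reimage night for this machine only
DEFAULT reimage
LABEL reimage
    KERNEL vmlinuz-restore
    APPEND initrd=restore.img
```

A nightly script on the TFTP server can create or delete the per-MAC files to schedule reimages.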

Nightly, network boot, or NT (1)

bluGill (862) | more than 10 years ago | (#7327874)

Windows 9x gives the user root. Anything you do can be bypassed, since anyone can rewrite the partition table to wipe out Linux and change the default boot partition to the Windows one. The only way I know of to get around this is booting off a chip in your network card; your server can either load Linux or tell it to boot off the Windows partition, depending on some scripts you set up. Due to the time it takes to re-install, I recommend doing this nightly; if you notice a machine that someone has screwed up, put it out of order until the next day.

Because you mentioned FAT, I'm assuming you are using something in the 9x series. Forget it, switch to something NT-based (2000 or XP), and don't give the users administrator. They then can't do nearly as much damage. IIRC you are about as secure as Unix, depending on your administrator skills. Install the software you need, and don't give users write access to any local partition. (They need access to floppies, CD-writers, and USB memory cards; I don't know how to set that up.) Make them save to a network server so they can get their data from any machine on campus.

Re:Nightly, network boot, or NT (1)

TiggsPanther (611974) | more than 10 years ago | (#7336476)

Where I am we have Win98 machines for the Users. (Not my choice. Setup predates my employment here.)

Yes it's a commercial solution, but we use DeepFreeze to keep the machines locked down. It's very very hard to screw up one of the machines here, and if the worst comes to the worst we can re-clone a machine as a last resort.

The only drawback to this is that if it isn't set up for remote administration, it's a real bugger to install any legitimate upgrades. So minor changes (like adding one shortcut to the desktop) become a major undertaking.
(And automatic antivirus signature updates will automatically revert upon reboot.)

roll-your-own idea with rsync (1)

phildog (650210) | more than 10 years ago | (#7327893)

- 2 equal partitions on clients
- use cygwin's rsync to auto update the passive partition
- move folder "os.old" to "os" when rsync complete
- round robin boot between the partitions

this may be a terrible idea; I have never tried it

A free solution for the Mac (1)

Cainam (10838) | more than 10 years ago | (#7327961)

Here at Ohio State, we use a free program called RevRdist [purdue.edu] to keep the Mac machines up to date.

Rembo (1)

drsmithy (35869) | more than 10 years ago | (#7327982)

At the Uni I used to work at we used a product called Rembo [rembo.com] and it worked well. It uses multicast to reimage (amongst other useful things), so reimaging an entire lab doesn't bring your network to its knees.

Watch out with Multicast (1)

pyite (140350) | more than 10 years ago | (#7328134)

Make sure you contact whoever handles your networking so you can properly configure multicast for whatever app you use (Ghost, etc.). If not, you're almost bound to kill your upstream router, especially if it's older. It happens fairly frequently at the U where I work: CPU load goes high on a certain router, you check it, and it's just flooded with multicast from incorrectly configured applications.

Linux + VMWare + Windows (1)

SysKoll (48967) | more than 10 years ago | (#7328426)

If Windows is absolutely irreplaceable, I found the easiest solution was to buy a Linux VMware license for each machine. Install Windows in the VMware environment. Save a snapshot (which is just a large regular Linux file). Copy the snapshot to a server. Restoring the Windows environment is as simple as restarting VMware from the snapshot. Costs about $300 per machine.

SystemImager v3.0.1 (2, Informative)

bastion (444000) | more than 10 years ago | (#7328622)


SystemImager makes it easy to do automated installs (clones), software distribution, content or data distribution, configuration changes, and operating system updates to your network of Linux machines. You can even update from one Linux release version to another!

It can also be used to ensure safe production deployments. By saving your current production image before updating to your new production image, you have a highly reliable contingency mechanism. If the new production environment is found to be flawed, simply roll back to the last production image with a simple update command!

Some typical environments include: Internet server farms, database server farms, high performance clusters, computer labs, and corporate desktop environments.

You could... (in theory) clone your Linux box with the Windows partition intact and set your grub.conf to boot Windows automatically post-install, whereby you update your 'gold image' and redeploy it with patches, etc. rsync works on Win32, but I'm not sure if daemon mode works; this could be worked around by running a scheduler, but then you would have to script the patch and software installs (not completely recommended for security purposes).

Don't do this (2, Interesting)

CompVisGuy (587118) | more than 10 years ago | (#7328894)

When I was an undergrad, we had machines that were managed like this.

There were two different setups, and I can't tell you what software they used to achieve them, but I can tell you what happened from a user's perspective.

In the first setup (a small lab -- about 20 machines), the machines were set up to automatically replace their installation of Windows once a week at a "convenient time". The problem was, this time was convenient for the sysadmins rather than the users. So, when working on a project outside scheduled lab times, I would often have to wait about 30 minutes to start work while the machine got a fresh copy of Windows. This was even worse if there was more than one person waiting to use a machine, as the network would slow down.

The obvious solution to the above problem is to change the time to something like 3am. However, in these days of devastating Windows worms, I don't think it's an option to install a new image once a week. Also, many university computer facilities are open 24/7; you often get students who like to work antisocial hours, so choosing a convenient time is pretty difficult.

The second setup was a more campus-wide solution. I'm not sure how they achieved it, but it seemed that each machine maintained a log of which files were changed while a particular user was logged on. When they logged off, the machine simply returned the disk to the state it had been in before.

There are many problems with doing what you suggest:

+ User ignorance: naive users are used to saving their stuff to C:. If you then overwrite the disk, they will complain about your policy eating their homework.

+ If you have one 'master' disk image, how do you manage the different drivers required for different hardware? It's impossible to maintain a large number of systems with exactly the same hardware (when you consider component failures etc).

I would suggest the following: Use the permissions and management facilities of the OS to prevent users installing their own software or writing to the C: drive etc. Really lock them down. Give each user networked disk space which only they can write to. Make sure that you have an automated way to roll out patches, and keep on top of things. Make sure your virus protection is tip-top. Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).

Further, for each "group" that needs to work together (e.g. small groups of final-year students working on a particular project), provide a "transfer" area which they can all read and write. For users who need to install their own software (e.g. computer science researchers), establish a small team of sysadmins at their location and let them do their own thing -- just make sure they are sufficiently safe behind a firewall so they can't easily shoot themselves in the foot and your managed main network is safe from any of their screw-ups.

Re:Don't do this (1)

SenorAmor (719735) | more than 10 years ago | (#7330179)

The second setup was a more campus-wide solution. I'm not sure how they achieved it, but it seemed that each machine maintained a log of which files were changed while a particular user was logged on. When they logged off, the machine simply returned the disk to the state it had been in before.

Perhaps you're referring to a Centurion Lock [centuriontech.com]?

Re:Don't do this (1)

toast0 (63707) | more than 10 years ago | (#7333138)

Regarding groups, it would be even nicer if there were a way for students to create groups themselves, so they don't need to bother the sysadmin, or wait for the sysadmin.

Since group projects are pretty big in undergrad these days, it'd be nice if the students could easily have group storage without having to do it on their own machines (since school-run servers tend to be more reliable, easier to connect to, and on a faster connection than student-run ones).

How true... (1)

MadAnthony02 (626886) | more than 10 years ago | (#7367153)

Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).

I work at a college, too, and I can't tell you how many times students have walked in with a floppy disk (sometimes physically damaged, i.e. cracked in half) holding the only copy of a paper that's due in an hour.

We've had pretty good luck using BadCopy (from JufSoft) to recover the disks, but sometimes they are too far gone, and students can't understand how it could get damaged since "it worked an hour ago"

We're trying to encourage flash drive use, which brings up more issues, i.e. how do you let users install hardware on a locked-down machine -- especially tricky since every brand of drive seems to act differently, and sometimes tries to grab network drive assignments.

The other thing we've started doing is creating Novell Netstorage accounts (shared drives accessible through the internet from anywhere). Good when it works, but most students don't know it exists, plus they have to learn fun novell logins (.username.context)

Not tricky to implement your dual boot solution (2, Informative)

PD (9577) | more than 10 years ago | (#7328907)

Set up Lilo with two targets: Linux and Winders.

Make Linux the default target to boot to.

When you're inside of Linux, and you want to set it so it boots Windows for the next boot, and only the next boot, then you do a

lilo -R windows ; shutdown -r now

The next boot will be into Windows. The boot after that will be back into Linux.

Seems like you could set things up very easily to do what you want.
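A hypothetical /etc/lilo.conf with the two targets (device names and kernel path invented); "lilo -R windows" then overrides default= for exactly one boot:

```
default=linux
prompt
timeout=50

image=/boot/vmlinuz
    label=linux
    root=/dev/hda2
    read-only

other=/dev/hda1
    label=windows
```

Because -R only applies once, a crash mid-update still leaves the machine booting back into Linux, where the sync can be retried.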

PQ DeployCenter (1)

1eyedhive (664431) | more than 10 years ago | (#7328928)

I just set up a 29-node XP lab using PowerQuest's DeployCenter, which is basically DriveImage on steroids: create an image of the OS using boot disks, saving the image to a network server; make a boot disk hardcoded to grab and download the image from the server; run it on all clients. The problem we ran into is that the network here is a 100Mbit fiber full-duplex backbone with 10Mbit full-duplex UNSWITCHED horizontal runs. The lab workroom was wired into the MDF directly, but all boxes were on the 10Mbit line; a 6.2GB image took 9 hours when deployed to 10 boxes at a go (I TOLD the Mac-centric admins that it was a royally bad idea...).

The labs at other schools use Deep Freeze [http://www.deepfreezeusa.com/], which wipes all data added since the last restart, reverting the system to the old config. Any persistent data must be stored on a 2nd partition (called THAWSPACE, used to store info on changes and persistent data) or a network share (drive map or otherwise). The advantage is that the entire system is protected; it can be unlocked with a key combination and password if administration is needed. Common use:

1. set up host system
2. install Deep Freeze
3. make a Ghost (or DL) image
4. deploy the image to the nodes
5. sit back, relax and stop worrying about winboxes

This method, of course, costs $$$.

PC-Rdist (1)

UnrefinedLayman (185512) | more than 10 years ago | (#7329358)

Check PC-Rdist [pyzzo.com] out. We used them in about five labs to sync about 200-300 PCs running from Windows 95 to 98 to 2000 to XP. It's really fast, works extremely well, and has a lot of options that will let you customize how it runs. For example, if they're computers for students and students are prone to accidentally leaving their files on the PC, you can set it so when it runs it will save all .DOC files less than 1 MB (or whatever size) in a particular folder of your choosing, and after a week of being there they will be deleted.

Remarkably, it's pretty extensible. And it runs (for 2000 and XP) as a startup script through group policy, so there's no getting around it.

At least give it a look.

OS X Server (3, Interesting)

Johnny Mnemonic (176043) | more than 10 years ago | (#7329601)

This probably won't be able to apply to you, but it's worth knowing: Mac OS X Server can do this out of the box (to Mac clients). Apple calls it "NetBoot", and it's been available since at least 2000; I believe the tech came from NeXT originally.

Under OS 9 and 10.3 it allows for drive-less clients, since they get their entire OS from the server over the wire (10.1 and 10.2 required a hard drive, but only for swap), which is useful in some secure installations. Read more about it here [apple.com].

Re:OS X Server (1)

kwerle (39371) | more than 10 years ago | (#7334188)

I'm pro-apple and ex-next, but netbooting is hardly NeXT or Apple specific. Just about any unix variant will netboot. I've netbooted nextstep, solaris, and linux.

Just google for netboot bootp and tftp.

Can't see where anyone netboots windows, though...
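For anyone following up on the "netboot bootp and tftp" suggestion, the server side usually comes down to a DHCP entry pointing clients at a TFTP server. A hypothetical ISC dhcpd.conf fragment (addresses and the boot filename are made up for illustration):

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.1;     # TFTP server handing out the boot image
  filename "pxelinux.0";    # boot loader fetched over TFTP at power-on
}
```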

Re:OS X Server (1)

arete (170676) | more than 10 years ago | (#7334980)

Any Mac will smoothly netboot, and it's easy to configure. That is not my impression of x86 *nix. To be fair, it's not the OS; it's Open Firmware vs. the PC BIOS (both because of the variety of BIOSes and because the BIOS is often less powerful).

Re:OS X Server (1)

Johnny Mnemonic (176043) | more than 10 years ago | (#7335003)

I stand corrected. Interesting, though, that one can netboot OS 9 clients--I guess it's because you've got a Unix Server doing the heavy lifting.

Boot Prom? (1)

!3ren (686818) | more than 10 years ago | (#7330281)

Some network cards support a boot PROM; you could boot off a server and copy an image down to the client at that point.
Not so hot in terms of network traffic at 8am, and God forbid your user saves locally and the machine locks up, or your server gets compromised (shudder), but maybe an option if you can work around those hurdles.
Just a suggestion, anyways. :)

lilo -R boots to other OS once (2, Informative)

korpiq (8532) | more than 10 years ago | (#7330343)

I'd put something like this into a script (/etc/init.d/restore_windows):

    lilo -R windows
    shutdown -r now

Is that too simplistic? man lilo for the -R switch.
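Spelled out a bit more, here's a sketch that generates such a script. The "windows" label is assumed to exist in /etc/lilo.conf, and the script is written to the current directory here (on a real box it would go in /etc/init.d/):

```shell
# Generate the one-shot restore script described above.
cat > ./restore_windows <<'EOF'
#!/bin/sh
# "lilo -R windows" arms a one-shot default: the NEXT boot uses the
# "windows" entry, then lilo falls back to the normal default (the
# Linux maintenance partition), so the cycle repeats.
lilo -R windows
# Reboot immediately so the one-shot default takes effect.
shutdown -r now
EOF
chmod +x ./restore_windows
```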

pay for it.. (1)

Suppafly (179830) | more than 10 years ago | (#7330813)

Just break down and pay for Norton Ghost or a similar program. That way when it doesn't work, you can make them fix it.

Re:pay for it.. (1)

priceb (629287) | more than 10 years ago | (#7331646)

Norton Ghost may be one of the better products on the market today, but it is also the cause of many headaches in my job as a lab manager. Sometimes it works great, and other times it refuses to work. Basically, the moral of this story is that just because you spend money on something does not mean it will always work. However, for restoring the condition of the system on every boot, we are using DeepFreeze. It is one of the best investments we ever made.

Already did this. (2, Informative)

transiit (33489) | more than 10 years ago | (#7331670)

I helped a guy set up this exact FAT32 + rsync setup.

We used Smart Boot Manager [sourceforge.net] and set up scheduled reboots.

Works like a charm. Note that it not only cleans up the machines at the end of each day, it will also allow you to patch your master image and push that out to the network. (even a one-day lag is still faster than going from machine to machine patching or ghosting)

Watch out for oddities such as the Daylight to Standard time switch, though.


Re:Already did this. (1)

kamzik (180000) | more than 10 years ago | (#7333908)

Mod the parent up!

This is exactly the perfect solution (at least this solves my problem perfectly).

Re:Already did this. (1)

TiggsPanther (611974) | more than 10 years ago | (#7336509)

Watch out for oddities such as the Daylight to Standard time switch, though.

This is a bitch with DeepFreeze too (though a worthwhile one compared to the havoc students/tutors can cause on the PCs).
I was just lucky this week that it coincided with both a no-lessons week and the latest virus-signature update.

Re:Already did this. (1)

ddent (166525) | more than 10 years ago | (#7368227)

Here is what we do about the DST problem:

1) Machines are set to completely ignore DST updates
2) The Samba login script syncs the time at every login.

That keeps the clocks right, and the dialogs down.

Frisbee (1)

jrstewart (46866) | more than 10 years ago | (#7332625)

Check out Frisbee for fast disk imaging.

From the abstract:

Both researchers and operators of production systems are frequently faced with the need to manipulate entire disk images. Convenient and fast tools for saving, transferring, and installing entire disk images make disaster recovery, operating system installation, and many other tasks significantly easier. In a research environment, making such tools available to users greatly encourages experimentation.

We present Frisbee, a system for saving, transferring, and installing entire disk images, whose goals are speed and scalability in a LAN environment. Among the techniques Frisbee uses are an appropriately-adapted method of filesystem-aware compression, a custom application-level reliable multicast protocol, and flexible application-level framing. This design results in a system which can rapidly and reliably distribute a disk image to many clients simultaneously. For example, Frisbee can write a total of 50 gigabytes of data to 80 disks in 34 seconds on commodity PC hardware. We describe Frisbee's design and implementation, review important design decisions, and evaluate its performance.

http://www.cs.utah.edu/flux/papers/frisbee-usenix03-base.html

pc-rdist (2, Informative)

tangsc (161284) | more than 10 years ago | (#7334911)

We did such a thing to manage 3 computer labs for the college of engineering at a large university (they deployed it to a couple more labs after I graduated). We used a program called PC-Rdist (http://www.pyzzo.com/). It is based on a Unix app called rdist. It was great. We used it to manage the different desktops, deploy applications, etc.

In reply to someone's comment about work space: when you set up applications, just make sure their default save location is in such a directory (also, use NTFS permissions to enforce it). Then you don't touch files in the directory unless they are XXX days old or so.

The every-other-reboot thing (1)

tylernt (581794) | more than 10 years ago | (#7335102)

I've done it.

Google for JO.SYS and download the free one some guy wrote. Configure JO.SYS to boot to the hard drive after a 1-second delay. Google for and download int19.com (it makes a PC warm-reboot). Put both files on a floppy. Rename JO.SYS to JO.BAK. Configure autoexec.bat on the floppy to do your thing (re-image with Ghost, whatever), then rename JO.BAK back to JO.SYS and call int19.com.

Finally, configure some kind of startup script on the hard drive to rename a:\JO.SYS to a:\JO.BAK.

Now, every time you reboot, it will boot alternately to floppy, then HD, then floppy, then HD, etc.

How? The presence of JO.SYS on the floppy causes a boot to the hard drive; the absence of JO.SYS on the floppy causes a boot to the floppy.

This must be used with a Win98 boot floppy to work.

seems simple (1)

aminorex (141494) | more than 10 years ago | (#7335150)

Use GRUB with a UMSDOS boot partition. Have the Windows image copy a grub.conf into place when it boots, such that the default boot partition is the Linux UMSDOS partition. Have the Linux partition copy a grub.conf into place when it boots, such that the default boot partition is the Windows partition.
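The flip-flop above can be sketched as a tiny startup helper that each OS runs to make the *other* OS the default for the next boot. The "grub.conf.default-*" filenames are made up for illustration:

```shell
# Install the grub config that defaults to the named OS on next boot.
set_next_boot() {
  grubdir="$1"  # normally /boot/grub (illustrative)
  next="$2"     # "windows" or "linux"
  cp "$grubdir/grub.conf.default-$next" "$grubdir/grub.conf"
}
```

Linux would call `set_next_boot /boot/grub windows` at startup, and the Windows side would copy the Linux-default config back via a startup script against the mounted boot partition.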

What I've seen... (2, Interesting)

DaracMarjal (513394) | more than 10 years ago | (#7336467)

The University of York used to do this. The computers would network-boot to a small menu system (probably DOS or something). You could either choose to boot Windows (whereby the hard disk was chainloaded) or rebuild the PC.

Rebuilding the PC downloaded an image from a central server and re-imaged C:

If, however, the menu system noticed that the time was after 1:00am and the PC hadn't been rebuilt for 24 hours, it would force a rebuild, cleaning up any leftover problems.

The system was enforced by removing the Logoff option from windows, requiring users to reboot after a session.

The only problem was, as mentioned above, that if you're working an all-nighter on your project and forget to save, then when Windows crashes at 5am the rebuild will begin and wipe out your crucial temp files.

I suppose the solutions to this are A) put $TEMP on D:, a non-imaged partition for general junk, or B) only re-image if the PC has been idle for a set amount of time (e.g. hold at the menu system for an hour, then re-image).

Automatic Ghosting. (1)

Caledai (522776) | more than 10 years ago | (#7382810)

There are two options. Norton Ghost works well for images, but it takes about 20 minutes and doesn't work on Mac OS; not sure about Unix/Linux.

However, there is hardware out there that will undo any changes to the system on the next boot, even if the HD is formatted. The solution I know of is called ZeroCard. Set the computer up, install the PCI card, then set it up with a password. When you want to make changes, do so, restart, and boot either off the disk or hold down a key and enter the password. (I'm a bit sketchy on the details; it's been a while since I looked at it.)

Hope this helps.

Frisbee (1)

sysadmn (29788) | more than 10 years ago | (#7386338)

Google 'Frisbee' from the Univ. of Utah. Complete re-imaging in about 30 seconds! It was originally developed to rebuild a cluster used to test network protocols.