Yea - finally. After toying with the idea for months, I finally broke the ice and got a Wiki installed.
I have been looking for a knowledge capture and management platform to use at work - something better than 'Mike has some magic SQL scripts to fix that, keeps them in a Word doc on his desktop' to tame the environment (more like 8 environments.)
I looked around, found Wikipedia, and thought the underlying technology - a wiki - would be a perfect fit (the technology, that is, not posting all my info to Wikipedia). As usual, the name alone is making it difficult to get past the Productivity Prevention Team at work - so I took it underground. Ok, maybe I just took it home to work on it here.
Tried TWiki - looked cool, I heard some good things about it if you have Tomcat already running (which I didn't) - but it didn't play nice with WSAD's WAS 5.0 test environment and I didn't feel like figuring it out. Back to Babbages for TWiki.
Hear about MediaWiki a few times over the course of a few months, but I don't pay attention.
Someone mentions Wiki on a Stick (USB Thumbdrive, etc) so I look it up. Looks good, but I don't have time.
Last night I have some time, start playing with it and less than an hour later I have it up and running. It really is a no-brainer, just follow the bouncing ball and it installs on a Windows box really nicely.
It doesn't seem to be a hardened install of anything, but for internal use only it looks like a great implementation. The nice thing is that it is completely removable - just shut it down and remove the thumbdrive (the upside being that you can start it on pretty much any machine without re-installing or configuring it.) As an added bonus, you get a zero-footprint install (no .dll's or registry settings to worry about - although you will need to tweak the IP address it listens on in some .conf file) of the Apache web server w/ PHP and a MySQL database server - all in under 32M.
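Since the whole stack rides along on the thumbdrive, a quick sanity check after plugging it into a new machine is just to poke the ports. A minimal sketch in Python - the port numbers are assumptions (80 for Apache, 3306 for MySQL; whatever your .conf files actually say wins):

```python
import socket

def service_up(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default ports for the wiki-on-a-stick stack - adjust to taste.
for name, port in [("Apache", 80), ("MySQL", 3306)]:
    print(name, "up" if service_up("127.0.0.1", port) else "down")
```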
And in case it matters - Glenlivet 15 year old, French Oak Reserve.
Don't worry my friends, I didn't forget about you. Been playing with new and exciting ways to apply some of the hardware in my RAIC.
About a week or so ago I found a wicked fast FTP site offering up Novell's latest release of SuSE 9.2 Pro in a nifty Live CD/DVD release. Kick ass! As if running Linux in a VM isn't enough, now I can run it on the bare metal without even touching the hard drive. Pop in the CD, boot the machine, and do the workaround* that is necessary on an LCD (the Live version defaults to an 85Hz refresh rate for the monitor, which freaks out the LCD.)
I am running it on one of my 400sc machines, works fine with a crap ATI 8M video card, as well as with the GeForce cards in the other boxes. Granted the box also has 2G of RAM and a P4 2.8GHz with HT, but... who's counting?
If you never want to hassle with another spyware-laden site, drive-by installer of CometCursor or whatever, popups, or hostile web sites - but still want to search the web for all your pornographic needs - Firefox on the Live CD is very nice. The version that includes Gnome also comes with Evolution 2.0 and Ximian, for those that work in an Exchange environment.
Note that you can do anything you want to the system, change it all up and use it how you like... just understand that when you reboot the box it is going to all be wiped clean.
* Workaround described, cut and pasted from where I found it:
- switch to a text console via Ctrl-Alt-F2
- go to run level 3 via "init 3"
- run "sax2 --vesa 0:1280x1024@60", configure the display...
- go back to runlevel 5 via "init 5"
Note if your LCD is a 1024x768 then adjust the third line accordingly. If your LCD is a 640x480, probably time to buy a new display.
Glonoinha writes | more than 9 years ago
Ok - the quest for higher knowledge and enlightenment continues. Not.
Actually I'm just looking to attain nerdvana - the fastest computing environment I can possibly manage.
This week Santa brought me exactly what I was hoping for... six 512M DDR pc3200 sticks of memory for my mini-cluster. Two of the machines got bumped to 2G, one got a bump to 1.25G and the designated file server stayed at a pitiful 512M. Now WTF do I do with it (see previous question)?
RamDrive, of course. I downloaded the demo version of SuperSpeed's ramdrive package - it installs at the driver level and looks like just another piece of hardware. 45 Day trial period ought to be plenty. I installed it on the two machines with 2G apiece and broke out the stopwatches - figured that maybe with solid state drives I could finally realize the full potential of the gigabit backbone I put in last month. Two machines get configured, each with a 1.75G ramdrive, and out come the benchmark tests.
Initial feedback : positive, but not unGodly.
When staying on the same machine, copying files of any size from a hard drive to the SSD is still limited by the read throughput of the hard drive - in this case about 35-38 megabytes per second. This isn't bad, and is a HELL of a lot faster than when copying a file to a new place on the same hard drive.
When copying a single big file (800M) to another place on the same SSD things start to heat up - 220 - 240 megabyte/sec range. Given it is reading and writing on the same bus (full duplex) that's close to 500MB/s bandwidth on the memory bus.
I still haven't come up with anything that can benchmark peak read or write throughput, although Nero is reporting my hard drive at 55MB/s and the ramdrive at 1,392MB/s... for what that's worth.
Moving a big file (1.5G) over the network from one ramdrive to another was a little disappointing - 53MB/s. That's over a GigE back end (Intel integrated NICs on the PowerEdge 400sc machines, Netgear 5-port GigE switch, hand-crimped (by me) cables using Cat5 hardware), using a regular file copy (command line in a DOS window to avoid any GUI overhead.) I was expecting quite a bit better than this - not sure what to think, or where the bottleneck is...
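For anyone who wants to reproduce this kind of stopwatch work without the stopwatch, a throwaway script works as well as anything. A sketch (Python, nothing vendor-specific - point src and dst at the hard drive, the ramdrive, or a network share and compare the numbers):

```python
import time

def copy_throughput(src, dst, bufsize=1 << 20):
    """Copy src to dst in 1M chunks and return throughput in MB/s.

    Note: the OS file cache can inflate the numbers for files that
    fit in RAM - use files bigger than memory for an honest read.
    """
    total = 0
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(bufsize)
            if not chunk:
                break
            fout.write(chunk)
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# e.g. copy_throughput("C:/bigfile.bin", "R:/bigfile.bin")  # R: = ramdrive
```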
On one machine I have moved the IE (yea, I know...) temporary files onto the ramdrive and IE seems snappier, faster to render the pages (note that I'm on a 3Mb/s connection so...) On pages with zillions of small files / graphic files it seems to make a difference. Nothing notable via numion.com however.
Stay tuned for more insights... same Bat Time... same Bat Channel.
What would you do with a 30GHz+ supercomputer cluster?
Glonoinha writes | more than 10 years ago
Yes - I'm soliciting your feedback.
March 16, 1999 : IBM raises the bar on supercomputing, matching the record for rendering a POV raytracing in three seconds - previously set by a $5.5M Cray supercomputer. The record was matched using a 17 machine cluster costing roughly $150,000. Read about it here.
Today you can buy a single 3.2GHz Hyperthreaded P4 machine that can run the same POV benchmark in the same 3 seconds, and it will cost you about $750.
Today Dell is running a sale, they are blowing out their 400sc series servers. $350 for a 2.8GHz Hyperthreaded P4 with 128M of RAM, a 48x CD, 40G IDE drive, and integrated Intel Gigabit networking. Dell Small Business, just look up the 400sc. Figure a grand total of $400 by the time you add in some aftermarket memory.
A few weeks ago /. profiled an AlienWare machine for about $5k - let us consider what kind of environment we can get for that same $5k : how about 11 of these machines with 512M of RAM each, a gigabit switch for the backbone, cables, and a nice 18" LCD monitor for the primary machine. That's 30GHz of CPU, plus the additional performance from the hyperthreaded CPUs - quite a monster of a cluster. Five years ago this system would have dominated the Top500 list. Maybe it wouldn't have taken the top spot, but surely one of the top 50 in the world.
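The back-of-the-envelope math, spelled out (the $600 left over for the switch, cables, and monitor is my estimate from the prices quoted above):

```python
machines = 11
ghz_each = 2.8          # P4 2.8GHz w/ HT, per box
cost_each = 400         # $350 sale price + ~$50 aftermarket memory

aggregate_ghz = round(machines * ghz_each, 1)  # aggregate clock across the cluster
boxes_total = machines * cost_each             # cost of the compute nodes alone
left_over = 5000 - boxes_total                 # what remains for switch/cables/LCD

print(aggregate_ghz, "GHz for $", boxes_total, "leaving $", left_over)
```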
But you can only render so many raytracings before that gets old...
So what next?
What would you do with a 30GHz+ 'supercomputer' cluster? Now that you can put this kind of horsepower in your home or office for ~$5,000 - what do you do with it?
There are some tasks (Doom III) that don't scale at all - you are going to get whatever frames per second that your fastest machine + your best video card give you regardless of how many additional machines you throw at it. There are other tasks - I'm looking for feedback here - that scale well enough to take advantage of the kind of horsepower we only dreamed of 5 years ago... so what are they?
What would you do with a 30GHz+ cluster comprised of 11 machines?
Off the Linux kick this week and back on the SSD trail.
SSD stands for solid state drive, and I love solid state drives. Rather, I love the concept of a zero-latency, unlimited-bandwidth hard drive, and SSD is about as close as I am going to get.
I have been using solid state drives (RAM drives, actually, but close enough) since about 1989, plus or minus - ever since my first i386 machine (I used them before then, but didn't actually own one until then). My boot files created a RAM disk (vdisk.sys or ramdisk.sys, I forget which was included in the Compaq hack of MS DOS, version 3.31), usually about 256 kilobytes out of my incredibly expensive two megabytes of caterpillar-looking DRAM chips, copied COMMAND.COM onto that ramdisk, and set the comspec to find and use it from RAM. Mind-bending performance gains: two orders of magnitude faster than my floppy disk, and easily a single order of magnitude faster than the ST-251-1 I eventually bought to upgrade that floppy-only system.
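For the curious (or nostalgic), the trick looked roughly like this - paths, driver name, and the ramdisk's drive letter are from memory and varied by DOS flavor (Compaq's 3.31 shipped VDISK.SYS, later MS-DOS shipped RAMDRIVE.SYS):

```
REM CONFIG.SYS - HIMEM first, then carve a 256K ramdisk out of extended memory
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\RAMDRIVE.SYS 256 /E

REM AUTOEXEC.BAT - assume the ramdisk came up as D:
COPY C:\COMMAND.COM D:\
SET COMSPEC=D:\COMMAND.COM
```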
Operating systems and applications get fatter. System RAM is now faster, and there is a lot more of it. Hard drives get larger and faster, much faster. CPUs get faster.
SSDs are out there today, Cenatek has one, and there are a few others. Actual Solid State Disks. And there is a pretty wide gap between a disk big enough to actually use, and one you can afford. $3,000 for a 4Gig one. $1,000 for a 1Gig one. OMFG. I have heard a few underground talks about HyperDrive III (or HyperDrive 3) being a 5.25" bay sized thing with an IDE interface and 8 slots for SDRAM of up to 2G chips each. Rumors and vaporgear right now, but I'm hearing something in the $800 range for the thing, plus the cost of memory. Going to also have a place to put a slaved hard drive and a small battery backup, and it will shadow the changes back to the drive from time to time.
Anyways, Cenatek also has a driver level RAM drive that uses system memory, costs $69 and has a 30 day free trial (30 uses actually, so don't reboot too often). It has some cool new features like shadowing the virtual disk (the ramdisk) to a file on your hard drive, saving the contents to that file during system shutdown and restoring the contents during boot up, and occasionally updating that file during regular operation (you set a time frame between updates) - so when your XP (did I mention I'm doing all this on my WinXP Pro machine?) crashes you don't completely lose the contents of your ramdrive. Pretty damn cool.
I have 1.25G in this machine now, and if I find myself loving the performance gains then going to 2G (adding two 512M chips at about $100 apiece) puts the price of an SSD in the 1G to 1.5G range (if I leave 512M for Windows to run in) at $270 : $200 for the memory and $70 for the software.
What am I going to do with a 1.5G ramdrive... that's the next question.
Using the 256M one I'm playing with during the limited demo, I have already moved my \Internet Temporary Files directory there and it seems to make a noticeable difference. It seems a little faster but I haven't done any serious empirical testing to be sure. Of course I have copied files back and forth from hard drive to ramdisk, and that hauls ass (100 megabytes per second writing to the ramdrive), but I don't do a lot of that in real life. It is too small to install any applications (read : games) onto at 256M.
1.5G doesn't give me much more than the 256M I'm currently using. Still not large enough to copy an entire virtual machine onto for making my VMware stuff run faster. Can't actually run the OS from the ramdisk, 'cause Win XP would freak out about that, and because it is a Windows driver level application. I could put a chunk of my swapfile on it, since Windows swaps shit to the swapfile no matter how much memory you actually have... that may make a difference, if it doesn't freak out on me in the process (not sure which loads first - the ram disk drivers, or the swapfile.)
I'm open to suggestions, and ideas, and general feedback.
Well after a year or two of cringing every time I heard the phrase 'SUSE Linux', knowing that there was only one Linux and RedHat was its prophet... I was forced to install Suse 9.1 this weekend and all I can say is 'thank you God.'
Don't get me wrong, RedHat 9 is good. Good like sex with a real woman. But SUSE is great, as in sex with two women at the same time. It uses the same xwindows subsystem that Knoppix (STD distro) uses, which hauls ass in VMware, and the core back end is very similar to RH9. I guess that's what a year of progress gives you, given when Suse 9.1 came out (recently) and when RH9 came out (a year ago, roughly.)
Anyways - there are still some quirks, but bit for bit I like SUSE 9.1 a LOT. Next step is to see how well Eclipse and the Websphere WSAD environment cooperate with it. If you are going to do Linux, give it a look.
Well for those of you following the saga of Linux in VMware on a Windows host, it continues. I upgraded to VMware 4.5 from 4.0 and I have to say it was well worth the upgrade. Two massive improvements : it lets you use up to 4G of memory for your virtual machines (up from the 1G in v4.0), and the drivers and performance for Linux (RH 9) are a lot better.
Before I was doing it, but I wasn't digging it. Now - I'm digging it. RedHat screams now - it is easily as comfortable to use (fast too) as the native Windows host OS.
If you are doing the VM thing, go get the upgrade.
All this time I have been jealous of the Debian users, only to find out I was one of them. How's that for a discovery.
For the record, the Knoppix-STD distro live CD is based on Debian so I am unwittingly a Debian user also. Given that, here's a recap of my perception of the differences between Debian and RedHat.
Debian is hella fast. For the longest time I was chalking it up to the KDE XServer on the Knoppix I ran, versus Gnome (or is it GNOME?), the default in RH8/9 - but I figured out how to run KDE on my RH9 box and it was pretty much the same. Shit - for regular day-to-day user crap (ie, surfing the web, FTPing files around, reading /.) I am actually happier in the Knoppix KDE environment than anywhere else, just because it seems so much faster, I like the virtual desktops, tabbed browsing in Mozilla, no worry about spyware, and the built-in pop-up killer. Granted I'm running it from the hard drive (actually in a VMware virtual machine on a WinXP machine, how ya like them apples?) and not off the CD, but still - smoking fast, and if I don't like where that session went I just kill the VM and the next time I fire it up it is just like I like it. The RH install is persistent and my files are there when I come back later (this is generally a plus), and it is giving me the opportunity to spend lots of time in the command shell, but I'm still not comfortable enough with it to stay there for any length of time.
She was bright, cute, educated, interested, and wanted to skip from email to IM. And she was on AIM (AOL Instant Messenger). DOH
She asked me to download AIM and make an account - that wasn't gonna happen. But I did spark up Gnome and find GAIM, plugged in a username / password and was up and chatting. Decided it would be cool to use GAIM to chat up my existing MSN network of friends, but I had to update my system first.
Learned all about tracking down dependency issues, finding all the SourceForge RPMs for all the packages I needed, learned some syntax for RPM (rpm -Uhv installs a package, I figured that out...) and finally got GAIM version 0.75 running and now I am up and running with my MSN/IM buddy list.
I'll probably never meet her, but if I do I will have to thank her for pushing me deeper into learning how Linux works, inter-relates, and updates.
Ok I did it - went to www.knoppix-std.org, downloaded an iso of their latest release, burned it to CD, and popped it into an older laptop to see how I liked it. I am not entirely trusting of the world, so I popped it into a disposable laptop just in case...
Dell Latitude CPiA - 366MHz with 192M RAM. 2M crap video card pushing a 1024x768 13" display. 3Com 10/100 nic, wired.
First impressions : very nice. It recognized my hard drives, although it didn't seem to write anything to them during boot up. It makes a ramdrive and runs everything from there. If I had to guess, I would guess that persistence can be gained by copying the contents of the ramdrive somewhere more permanent, but I haven't gotten that far yet.
On my PII/366MHz box with 192M of RAM it actually runs very fast - in fact this is better than RH9 installed in a VM (VMware running on an XP Pro host with a Gig of RAM and a P4/2.4GHz w/ HT) - although the other system is using Gnome, and I'm using KDE in Knoppix. Still - other than the occasional performance hit when reading from the CD-ROM it is pretty nice.
I will keep the CD handy, let's leave it at that.
I just created a VM on the other machine, used this .iso as a virtual CD and started it up, and it absolutely screams. Now I just need to figure out persistence between sessions.
Update - DOH. regular ol' Knoppix is www.knoppix.org, the -std version is the security tool distro. I was wondering why this one was just chock FULL of sys/admin and security tools.
Well I learned this weekend the hard way that it isn't worth upgrading a system with 1.25G of RAM to a full 2G. I am running VMware on that system, was counting on being able to leverage the additional memory but VMware workstation will only use 1G, plus whatever for the host OS... 1.25G is just about perfect.
Product Documentation and Literature... it's amazing what kind of things you can learn by reading it before assuming anything about a package.
Well those of you following this thread know that I am playing with Linux in VMware, running on a MS host. I recently put XP Pro on a P4 2.4GHz HT box and installed VMware on it, was able to make a two-box shootout between that machine and my reigning champion (the P4 2.4GHz nonHT box running 2000 Pro.)
For the record I already did this with 2000 Pro running on the HT box and the non-HT machine was faster by a touch (I'm guessing 5% off the cuff.)
Under XP Pro, the HT box is faster than the non-HT box running 2000 Pro by about the same difference. Start the vmLinux sessions at the same time, the one running in a VM on the XP host gets to the login prompt about 5-10 seconds faster, start Gnome and it is done about 4-6 seconds faster.
I attribute this to XP handling the hyperthreaded CPU better than 2000 Pro did, but it might have something to do with the XP box being a totally fresh install on a recently defragged hard drive. 3% or so isn't statistically significant - unless they are running side by side I can't tell the difference... but it is (a little) faster.
I haven't tried disabling HT on the XP box, expect updates here if I do. Note - I am also considering running two instances of the same VM from different directories to see if the HT has any effect on performance when multiples are running.
For what it is worth : XP box stats : P4 2.4GHz HT, 1.25G pc3200 RAM, and a fast Hitachi drive. Win2000 Pro box stats : P4 2.4GHz non-HT, 640M ECC/Registered pc2100 RAM, fast Western Digital hard drive.
Wow - and to think I was ready to go out and buy a new damn Handheld PC. Maybe one of those new Dell Axims, or the new one from Viewsonic that has .NET extensions... I have had an HP Jornada 680 for about 2 years now - mostly a toy, and I had just about end-of-life'ed it because it is slow with its little 133MHz SH3, outdated with its WinCE 2.11/3.0 operating system, and mostly because the Internet Explorer available on it is like IE 3.0 - nothing more intense than www.craigslist.org will display. I had dreamed oh-so-long-ago about a TermServer client but somehow never made the jump. Well, today I downloaded the client, configured my Windows 2003 Server EE machine to allow TermServer clients, and got it all working. Holy shit - this gave my little 680 a new lease on life. It is as if I had upgraded to a solid state laptop running WinXP on a low to mid range PIII, 256M, and a 640x240 display. Mind you the keyboard is a little hard to do lots of typing on, and the 640x240 screen is a little cramped, but I am doing this posting from it... so it is easily usable. Better than usable - it is pretty nice.
I wish I had done this a year ago. In about two hours this thing went from a $250 paperweight to a 2 pound, half-size, solid state, wifi connected, screaming powerful laptop with a touchscreen and a 12+ hour lifespan on the battery.
Actually this post was brought to you by the power of VMware. My journal musings are all done while I am in Linux (RedHat 9.0, using Gnome, connected to /. in Mozilla)... running in a virtual machine à la VMware on a Windows 2000 host.
I have a few machines. I want to experiment with things like Windows 2003 Server EE and Linux. But I don't have enough machines to do EVERYTHING I want to do... what to do, what to do...
Virtual Machines. VMs bridge the gap from game machine to powerful business tool. Nobody running a single instance of Windows XYZ Server is going to be able to fully utilize a quarter million dollar 16-CPU IBM x440 with 32G of memory regardless of what application(s) they run. No one application that you or I are going to run is capable of using all the horsepower of a full z-Class IBM mainframe - this is why z-Class mainframes use virtual machines : to slice up the power of that machine and make it available to a bunch of independent virtual computers all at once.
I have a virtual Redhat 9.0 machine (this one.) I have several virtual Windows 2003 Server Enterprise Edition machines to play with DMZ stuff. I have a clean virtual Windows 2000 Professional machine as a template to deploy test things to (copy the VM files to a new directory, start it, rename the machine, and start installing - takes about 10 minutes to roll out a new machine, six new machines fully configured and installed in under an hour.)
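The 'template' trick is really just a directory copy - VMware doesn't need anything fancier. A rough sketch of what that roll-out amounts to (Python; `clone_vm` is a made-up helper name for illustration):

```python
import shutil
from pathlib import Path

def clone_vm(template_dir, new_dir):
    """Roll out a 'new machine' by copying a template VM's directory.

    This is just a file copy - you still boot the clone afterwards and
    rename the guest OS so hostnames don't collide on the network.
    """
    src, dst = Path(template_dir), Path(new_dir)
    if dst.exists():
        raise FileExistsError(f"{dst} already exists")
    shutil.copytree(src, dst)   # .vmx, .vmdk, nvram - the whole machine
    return dst
```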
You can burn the VM files for a machine to DVD, take them to a new location or different machine, copy them to the hard drive, fire them up and be on exactly the same environment you left behind.
You can have several VMs running on the same machine at the same time, talking to each other over TCP/IP - just like regular machines. To the rest of the network they are no different than stand alone boxes.
Free 30 day demo at www.vmware.com - that is all it took to convince me. I encourage everybody to try it. Once you do, you won't look at computers the same.
No, I don't work for them - good thing too, because God only knows what EMC is going to do before they are done 'reorganizing'.
Well I found my x-ray goggles, ported to Linux. Check out http://www.unixtree.org for a close approximation to the best thing since sliced bread. For those that don't remember their history: in the beginning there were IBM AT class machines running DOS and it was good, but a decent file manager was nowhere to be found. XTree came along and everything was good. XTGold 2.x came out and it was possible to do things on a computer that were simply amazing. XTree was ported to Windows, wherein it sucked bad, was sold to Symantec (?) and eventually ended. Too bad, because as a CUI file manager it was the best.
Kim Henkel at www.ztree.com made an awesome clone for Win32 systems, even better than the original now that it wasn't memory limited to 535k or whatever. Go see Kim if you are a Win32 user, he will hook you up.
I found UnixTree this weekend, someone else must have also been an XTree junkie and wrote it, compiled it for every flavor of Unix or whatever including RedHat. It runs from the command line, and runs even better in Gnome (can go full screen, see more info at one time.)
Anyways seeing and interacting with the file structure via a familiar interface, being able to browse the directory structure and look at the contents of files without loading them in an editor or worrying about breaking something... really opens up a system for better understanding.
I go out and buy a hundred dollars or so worth of manuals (Sams Teach Yourself Red Hat Linux 9 in 24 Hours, and O'Reilly's Linux in a Nutshell), spend hours downloading the RH8 and RH9 distros, burn em to CD, spend days futzing with the RH9 install not getting it to work with the NIC on my virtual machine, install 8, reinstall 9 and get it talking to the world (but now OpenGL doesn't work), spend half an hour or so a day at the (bash, I think?) shell learning the syntax that most of you take for granted, no friggin XTree for the CUI... and yesterday RedHat announces they are dropping their desktop version of Linux, and more maddening, "Red Hat's CEO Suggests Windows For Home Users."
Talk about pulling the rug out from under me.
Doesn't matter, I am going to learn this damn environment anyways. In fact, all journal postings are done exclusively from within Mozilla on my RH Linux 9 machine.
The good news of course is that Novell is adopting some flavor of Linux - maybe they can do for Linux what they did for WordPerfect. Oh wait, that would suck. Then again operating systems, network performance and NDS put Novell on the map so maybe they will do better with Linux than they did with WP, as I recall 4 years ago (starting with Netware 5.x) they even used startx and some X server to put a GUI on Netware - so they have been dabbling with the idea for a while.
Ok - I take back all the ignorant crap I spewed over the past year or so, general blathering in tech threads when the OP wanted to know how to get his Linux boxen talking with his Windows network and share files in either direction.
I am guessing it has something to do with Samba, but I haven't got it figured out yet - and given that none of my other machines are running an FTP server, the only way to get shit from one machine to the other is to FTP it totally off-site, then FTP it back to the recipient machine. BLARG!
I may get Samba working, or I may just run an FTP server on one of my other machines.
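In the meantime, a zero-setup stand-in: Python's built-in web server can share a directory over HTTP, so any box with a browser or wget can pull files off it - no Samba, no FTP daemon. A sketch (the function name is mine; note it is wide open to anyone on the LAN, so internal use only):

```python
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_share_server(directory, port=8000):
    """Serve `directory` read-only over HTTP on every interface."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("0.0.0.0", port), handler)

# e.g. make_share_server("C:/outbound").serve_forever()
# then grab files from the other box with a browser or wget.
```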
Stay tuned to find out which.
Oh yea, in case anybody else is running RH9 / Mozilla - is it my imagination or does Mozilla run a little slower than IE 6.0? Might have something to do with me running it in a VM, but I have Windows VMs also and their IE doesn't seem to bog... wasn't sure if that was normal. I sure love the setting in Moz to turn off popups - that alone is enough reason to switch.
Well crap. Had Redhat 8 installed and running, figured it was cool but wanted to be running the latest release so I installed a new VM with Redhat 9. Followed the instructions quite a bit better the second time and actually got it running nicely. Deleted RH8 because... well surely RH9 is better.
Now that RH8 is gone I find out that the cool screen savers (Engine, End Game, etc...) don't want to run on my RH9 install. Damn. I am guessing it has to do with the OpenGL implementations or something - the hardware (or virtual hardware) hasn't changed so either I boned something during the install (possible) or the upgraded version of the display drivers doesn't like my virtual hardware. Not sure which. I will continue to jack with it, see what I can find out.
Other than that I am not seeing any end user experience differences between RH8 and RH9.
On a side note, I sure wish I had the opportunity in college to spend four years in a Unix environment instead of that damn Prime1 / PrimeOS environment. Blarg!
Edit : I think I figured it out - under RH8 I installed and used RedHat's X engine, but under RH9 I installed the VMware tools, and that X engine is supposedly enhanced to run in a VM. Maybe 'enhanced' is developer talk for 'doesn't use OpenGL'.
Ok, I hadn't been doing the journal thing up until now, but I figured it is time. I'm actually writing this within Mozilla, running Linux (Redhat 9.0 if anybody cares) that I got installed and running this weekend. Damn, the last time I messed with Linux was RH5.1 or thereabouts on a 486.
The install went fairly smoothly once I figured out that the supplemental install instructions for installing Linux on VMware were not suggestions, nor were they optional. Speaking of which, yes (horrors!) this is Linux running in a window on a Windows 2000 box via VMware.
Maybe I will use the Journal to track my explorations into Linux. Yaya I know, odds are if you are actually reading this you have been using Linux forever and yaya I'm just a damn newbie. So I will have stupid issues and problems - laugh with me, not at me.