
Get Speed-Booting with an Open BIOS

ScuttleMonkey posted about 7 years ago | from the speed-booting-next-up-for-olympic-sports dept.

Hardware Hacking 235

An anonymous reader writes to mention that IBM Developer Works has a quick look at some of the different projects that are working on replacing proprietary BIOS systems with streamlined code that can load a Linux kernel much faster. Most of the existing BIOS systems tend to have a lot of legacy support built in for various things, and projects like LinuxBIOS and OpenBIOS are working to trim the fat.

Speed Booting? (1)

valkabo (840034) | about 7 years ago | (#20929047)

Reminds me of dial-up ISPs for some reason...

Flash drives (4, Interesting)

drivinghighway61 (812488) | about 7 years ago | (#20929055)

Speeding up BIOS processes combined with flash boot drives will seriously decrease loading time. Are we closer to instant-on computers?

Re:Flash drives (3, Interesting)

CastrTroy (595695) | about 7 years ago | (#20929125)

I seem to remember the Commodore 64 being instant on. Granted our current computers are much more complicated than a Commodore 64, but it would be nice to get back to that instant-on era. Everything else seems to have gotten faster, or remained the same speed, the only thing that seems to continually get slower is boot times.

Re:Flash drives (1, Insightful)

Nazlfrag (1035012) | about 7 years ago | (#20929189)

I can remember the Amiga taking 1.5 seconds to boot into a full multitasking GUI, and rendering a fractal while it boots. One day they'll get it again, thanks to the success of OSS.

Re:Flash drives (1)

joe 155 (937621) | about 7 years ago | (#20929381)

I had an Amiga; they were ace. It was so long ago, though, and all I really used it for was Pang (what a game), so I can't really say how good the boot time was to full GUI and fractal... But I am reminded of a saying I read somewhere I can no longer remember (perhaps Wikipedia): software is getting slower faster than hardware is getting faster. I'd say that we might never get back to one-second boots with any major OS (although I have heard good things about the insanely obscure AmigaOS 4 - but don't even think you can buy hardware to run it on).

Re:Flash drives (2, Informative)

Saxerman (253676) | about 7 years ago | (#20930433)

The original Amiga 1000 had the Kickstart ROM chips, which allowed them to boot nigh-instantly. These included the important parts of the OS, and later even drivers and the kitchen sink. You would literally see a splash screen for a second, and then have a functioning computer complete with GUI. Of course, this meant surgery was required to swap in a new Kickstart ROM. And as later software required different versions of Kickstart to run, we started playing with software "kickers" which allowed you to load different versions via floppy disk.

The later versions (or, at least, the A500 and A2000 I used) stopped hard coding Kickstart on a chip, and you then needed to load the entire OS from a single 1.5MB floppy. Or, for the more affluent, a hard disk.

Re:Flash drives (3, Insightful)

SenorCitizen (750632) | about 7 years ago | (#20930623)

No, that's not quite right. The original Amiga 1000 didn't come with Kickstart ROMs, because the OS was still in a state of change. Instead, you had to load Kickstart from disk, and it ate up 256k of the 512k of RAM installed. Later Amigas came with ROM Kickstarts and could be started without a disk. The full Workbench environment still had to be loaded from disk, just like with the A1000 - on which you actually needed two disks to get the whole OS up and running.

The Atari ST, on the other hand, had the whole OS in ROM, except for the very first models. Even STs weren't instant-on though, because the bootloader would waste at least half a minute looking for a disk to boot from - it was actually faster to have a GEM disk with custom settings in the drive when turning the power on than booting from ROM only.

Re:Flash drives (1)

walt-sjc (145127) | about 7 years ago | (#20930667)

I think you have it backwards there partner... My A1000 (an original from Oct. 1985) required the "Kickstart" boot disk, and THEN you booted "AmigaDOS".

My A2000 had Kickstart in ROM requiring only the AmigaDOS disks. I upgraded those from 1.3 to 2.0.

The reason was that the ROM portion (Kickstart) was in flux when the A1000 came out, and they knew that they would need to update it several times, and didn't want to deal with swapping ROMS all the time. The original A1000 was hardly polished.

None of the Amigas I had (A1000, A500, A2000, and an A3000) were instant-on at all. They all required AmigaDOS via floppy or hard disk to boot the rest of the way.

Re:Flash drives (1)

Achromatic1978 (916097) | about 7 years ago | (#20930973)

None of the Amigas I had (A1000, A500, A2000, and an A3000) were instant-on at all. They all required AmigaDOS via floppy or hard disk to boot the rest of the way.

Pah. First thing I did when I got 1MB, then 1.5MB of memory(!) was follow the instructions in the manual to make a "Recoverable RAM Drive" (RAD:) that was bootable. :)

Re:Flash drives (2, Insightful)

Junior J. Junior III (192702) | about 7 years ago | (#20929495)

Sure, the C=64 was instant-on, but you had like 30-second seek times for the floppy disk, which is where anything exciting to run lived. If all you wanted to do was type simple command-line instructions like LOAD "*",8 you could call it "instant on", but you got almost nothing for it, and getting any user app functionality up and running still took a long time, vastly longer than it does now.

Re:Flash drives (3, Interesting)

Waffle Iron (339739) | about 7 years ago | (#20929523)

Sure, the C64 booted in a jiffy. Then it took 5 minutes to load a 50 KB app from the floppy drive. (Which is kind of silly, since the floppy drive had a CPU inside with as much horsepower as the main system unit itself.)

Re:Flash drives (0)

Anonymous Coward | about 7 years ago | (#20929989)

What the hell does that have to do with how fast the drive can read data off of the disk? If you could shove a Pentium in there, it wouldn't make the mechanics or the read head any faster.

Re:Flash drives (1)

anss123 (985305) | about 7 years ago | (#20930589)

The reason it was silly is that the CPU was very poorly utilized. If one loaded a more intelligent program into the C64 floppy drive's CPU, loading times could be halved, if not better. As it was, the dumb floppy drives of other 8-bitters were just as fast if not faster - and cheaper too.

Re:Flash drives (1)

Waffle Iron (339739) | about 7 years ago | (#20930703)

What the hell does that have to do with how fast the drive can read data off of the disk? If you could shove a Pentium in there, it wouldn't make the mechanics or the read head any faster.

The mechanics were fine. Back in the day I shelled out a good bit of money to buy a plug-in ROM upgrade that sped up floppy access by ~8X. The bad performance was a software problem, probably made worse by the overdesigned floppy drive that could execute more bloated software than the job required.

Re:Flash drives (3, Interesting)

alan_dershowitz (586542) | about 7 years ago | (#20930271)

If you plugged in a cartridge it was instant-on for those apps too. Now there's even a peripheral, the MMC64 [wikipedia.org] that lets you use SD cards on your Commodore 64, so I don't see anything that indicates we couldn't have instant on for writable media either nowadays, which was the point of the original post.

Incidentally, did the 1541 drive have as fast a CPU as the Commodore 64? I know the 1541-II's CPU was actually faster, and you could actually offload CPU processing to it across the serial interface; some games even did this.

Re:Flash drives (0)

Anonymous Coward | about 7 years ago | (#20930347)

The problem with the loading speed was the connection, not the processors or anything - the wire protocol is bad to begin with, but the tolerances on both sides (longer wait cycles to be "sure") killed it entirely.
Oh, and giving the floppy more RAM (and letting it use it) would have helped, too. In fact, there were speed-up mods that did exactly that: enough RAM to keep an entire track in there, plus a ROM modification.
Most mods replaced the cable, though.

Re:Flash drives (3, Funny)

rucs_hack (784150) | about 7 years ago | (#20930641)

All you C64 people, grr.

My Spectrum was awesome, and by dint of the fact that I couldn't afford a C64 (or even a VIC-20), I opted for the '48K ZX Spectrum beats your computer any day' line of reasoning, and affected temporary blindness when anyone started showing off sprites.

Ah yes, the hours of tapping away on a rubber keyboard. Hungry Horace, oh how many evenings you ate.

I took it out of storage and showed my son last year. He looked at it in a puzzled fashion and asked where the dvd drive was.

Crying is not manly, so I just mumbled and put it away again.

Re:Flash drives (4, Informative)

Cyberax (705495) | about 7 years ago | (#20929559)

I work with embedded systems, and my MIPS-based 166MHz board boots Linux in about 5 seconds, kernel loading starts almost immediately after power on.

I always wanted to have the same capability for my notebook. Sigh...

Re:Flash drives (1)

AnotherBlackHat (265897) | about 7 years ago | (#20929649)


I seem to remember the Commodore 64 being instant on.


Really? I remember waiting a very long time.
One of the major advantages of Fastload (TM) was that it bypassed the god-awful slow memory test on power-up.
I suppose "instant" is a matter of perspective.

-- Should you believe authority without question?

Re:Flash drives (1)

Source Quench (857046) | about 7 years ago | (#20929689)

Not forgetting the Acorn Archimedes... I can remember *beep* and then the desktop was there. Ah those were the days.

Re:Flash drives (1)

gslj (214011) | about 7 years ago | (#20929799)

The most satisfyingly quick computer I've ever had was an Apple IIe that booted from a flash ram card. If I set it up to boot into a program selector, one second flat. If I booted into AppleWorks 4.0, with a bunch of add-ons, four seconds.

-Gareth

Re:Flash drives (1, Insightful)

Anonymous Coward | about 7 years ago | (#20929315)

I have an instant-on computer. It's a Commodore C64.

Re:Flash drives (2, Interesting)

the_humeister (922869) | about 7 years ago | (#20929377)

In Windows you can have the computer's current state be saved and loaded the next time the computer is booted. Is there an option to load the state of the machine on a fresh boot? That would save time on reboots.

Re:Flash drives (2, Interesting)

vux984 (928602) | about 7 years ago | (#20929785)

Is there an option to load the state of the machine on a fresh boot? That would save time on reboots.

I have 4GB of RAM (2x2GB) with Windows x64, and expect to have 8GB within the year. I've tried hibernate or whatever it's called. It's not really faster. The time to save and load that 4GB file is non-trivial. In theory it's nice when you've got a lot of open stuff on the go, but then I don't trust it enough not to save all my work properly anyway.

Overall, I don't think it's that great, and I particularly don't like that it's the default in Vista. For example, when support tells you to reboot they want a fresh start; half the time the users just turn it off and on instead of 'reboot', and under Vista that just bounces into hibernate and back and accomplishes absolutely nothing.

Re:Flash drives (1)

hjf (703092) | about 7 years ago | (#20929959)

what do you want 4GB of ram for anyway?

OK let me rephrase: what do you want 4GB of RAM for anyway, if you don't have a RAID 0+1 array of Seagate Barracudas to make disk writes quick?

I use my (desktop) computer with S3 suspend. 5 seconds later and it's on again (it takes more time for my monitor to wake up). There's only one problem though: sometimes, the Bluetooth dongle takes a longer nap and I have to wait about 30 seconds to have my keyboard back.

Re:Flash drives (1)

Pad-Lok (831143) | about 7 years ago | (#20930303)

what do you want 4GB of ram for anyway?

That's right, because 640K is good for everyone!

On a serious note, maybe he/she is a 3D artist? CAD? Graphics? Videos? There is never too much memory to throw around.

Re:Flash drives (1)

vux984 (928602) | about 7 years ago | (#20930527)

what do you want 4GB of ram for anyway?

My memory usage when running my usual set of apps (excluding vmware) is around 2.5GB to 3.0GB. My comment about upgrading to 8GB was based largely on my use of VMWare; and the fact that I run various linuxes, along with XP Pro and Vista x32 as guest OSes (not all at once, but usually more than one); and that uses gobs of RAM.

OK let me rephrase: what do you want 4GB of RAM for anyway, if you don't have a RAID 0+1 array of Seagate Barracudas to make disk writes quick?

I currently spend very little time swapping to the page file, so the disk write speed is actually less of an issue. That said, I have a fast sata2 drive with NCQ enabled, and while not as fast as a raid, is faster than any single disk I've ever used before. (I also have an eSata drive hooked up for backups, etc).


I use my (desktop) computer with S3 suspend. 5 seconds later and it's on again (it takes more time for my monitor to wake up). There's only one problem though: sometimes, the Bluetooth dongle takes a longer nap and I have to wait about 30 seconds to have my keyboard back.


My wired USB mouse has the same issue [Razer Copperhead - it's hard finding a decent laser mouse for left-handed people; I think Logitech and Microsoft hate us.]; just waking up from sleep it will often take 15 seconds after the rest of the PC is ready to go. Drives me nuts.

Re:Flash drives (1)

walt-sjc (145127) | about 7 years ago | (#20930833)

what do you want 4GB of RAM for anyway

Not sure what HE wants it for, but I use it for running multiple virtual machines. Also, the extra ram is nice for disk cache - especially if you are compiling.

Re:Flash drives (0)

Anonymous Coward | about 7 years ago | (#20930567)

It would help if Windows wasn't what's known in the industry as "fucking retarded" when it comes to suspend/restore.

Unless you're actually using all 4GB, and chances are you generally aren't when you're ready to close the laptop for the day, there's absolutely no need to save all 4GB of memory on suspend/restore.

It would be even more helpful if most applications were designed to properly support suspend/restore and ran their GC and freed up pages that they don't need prior to suspend - but most don't. Of course, that doesn't help because Windows does merrily save the ENTIRE contents of memory even though it doesn't really need to.

In fact, on a well designed system, you only need to page out EVERYTHING and just page in enough to start and slowly page things back in as needed. There's no need to load the entire page file before starting.
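A rough sketch of that lazy-resume idea, with toy stand-ins for the saved image and for page faults (nothing here models a real hibernation format):

```python
# Toy sketch: save every page at suspend, but at resume pull pages back
# in only as they are touched, instead of reading the whole image first.
# The image format and page size here are invented for illustration.

class LazyResume:
    def __init__(self, image):
        self.image = image      # saved hibernation image: page number -> bytes
        self.resident = {}      # pages actually faulted back into memory

    def read(self, page):
        if page not in self.resident:               # simulated page fault
            self.resident[page] = self.image[page]  # restore just this page
        return self.resident[page]

img = {n: b"x" * 4096 for n in range(1000)}  # 1000-page saved image
mem = LazyResume(img)
mem.read(3)
mem.read(42)
print(len(mem.resident))  # only 2 of 1000 pages were restored
```

The win is that resume latency is proportional to the working set you touch, not to total RAM, which is the complaint above about saving and loading the whole 4GB.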

Re:Flash drives (1)

langelgjm (860756) | about 7 years ago | (#20931013)

I also have 4 GB of RAM on XP x64, and I use S3 suspend (Suspend to RAM - i.e., not hibernation) all the time. For me, it is significantly faster both to shut down and come back up, and it has the great advantage of turning off all the fans, etc., on the computer, so that it doesn't make any noise. It works flawlessly probably about 99% of the time.

I agree though, hibernation is often more of a pain than it's worth.

Re:Flash drives (1)

Korin43 (881732) | about 7 years ago | (#20930033)

Doesn't that defeat the point of rebooting? I use hibernate when I want to stop using power, and reboot when Windows gets screwed up.

Did you ever notice? (0)

Anonymous Coward | about 7 years ago | (#20929487)

I'm sceptical. One would hope, but I would absolutely swear that the time to go from "power on" to a "boot:" prompt has stayed EXACTLY the same over the past 25 years.

You would think that modern CPUs would get through the BIOS quicker than they did for the 8086, the 286, the 386, etc. For each new, faster CPU, I've been amazed that they all still take so long, and take about the same amount of time. The latest CPUs are possibly even slower (that's certainly the case with the Itanium).

Go figure.

Re:Did you ever notice? (2, Interesting)

BUL2294 (1081735) | about 7 years ago | (#20929851)

You honestly think your BIOS is slow? Ever watch a 4.77MHz IBM XT with 640KB RAM go thru its memory test and POST?

16 KB OK.....(5 sec).....32 KB OK.....(5 sec).....48 KB OK.....(5 sec).....64 KB OK.....(5 sec).....

I seem to remember it taking 1 minute to go thru the memory test.

Re:Did you ever notice? (2, Informative)

DrSkwid (118965) | about 7 years ago | (#20929879)

You can be as skeptical as you like. LinuxBIOS has been doing 3s boots.

Re:Flash drives (0)

Anonymous Coward | about 7 years ago | (#20929501)

Sun's OpenBIOS is open now and fast for suspend; it's used on the OLPC. And SunOS can suspend fast too.

Instant-on computers (1)

Z00L00K (682162) | about 7 years ago | (#20929627)

weren't uncommon in the early '80s - but they had a ROM BASIC built in. When we got the PC there was suddenly a need for a lengthy and cumbersome procedure before the computer was even remotely usable.

All those procedures performed in the BIOS today are often unnecessary unless you run a legacy operating system like DOS that actually uses the BIOS. Linux, as an example, only uses the BIOS for a limited number of tasks; it does most of the hardware management in the kernel without much need for the BIOS to be around. So replacing, or at least speeding up, the BIOS would be very nice.

What about Abstraction? (4, Insightful)

CodeBuster (516420) | about 7 years ago | (#20929111)

Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)? The violation of abstraction merely for performance improvements is something that engineers should generally be very reluctant to do.

In theory, yes. (5, Insightful)

khasim (1285) | about 7 years ago | (#20929205)

But the problem is that BIOSes cannot be trusted today.

So the more advanced operating systems probe the devices themselves to see what capabilities are available.

We've arrived at the point where we need to choose between updating the BIOSes on motherboards every time a new capability is added (and on all previous motherboards) ... or just simplifying the BIOS to the point where it can boot the OS and allow the OS to probe everything.

It's easier to update the OS than the BIOS.
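The probing approach described above can be sketched as a toy: the kernel ignores whatever the firmware claims and matches what it finds on the bus against its own driver table. The bus contents, vendor/device IDs, and driver names below are invented for illustration.

```python
# Toy sketch of OS-side device probing: walk the bus ourselves and bind
# drivers from the kernel's own table, ignoring any firmware hints.

DRIVER_TABLE = {
    (0x8086, 0x100E): "e1000",    # hypothetical (vendor, device) -> driver
    (0x10EC, 0x8139): "8139too",
}

def probe_bus(devices):
    """Return a driver binding for each device the OS recognizes."""
    bindings = {}
    for slot, (vendor, device) in devices.items():
        driver = DRIVER_TABLE.get((vendor, device))
        if driver:
            bindings[slot] = driver   # known device: bind our own driver
        # unknown devices are simply left unbound
    return bindings

# Fake bus: what a config-space scan might report.
bus = {"00:03.0": (0x8086, 0x100E),
       "00:05.0": (0x10EC, 0x8139),
       "00:07.0": (0xDEAD, 0xBEEF)}  # device the kernel doesn't know

print(probe_bus(bus))
```

Since the driver table ships with the OS, adding support for new hardware means updating the OS, not reflashing every motherboard, which is the trade-off the comment describes.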

Re:What about Abstraction? (1)

psbrogna (611644) | about 7 years ago | (#20929319)

Abstraction is important if you're sending boxes out the door not knowing what they'll be doing. If you know what they'll be doing (i.e. booting a Linux kernel, etc.), you can optimize them for your selected environment.

Re:What about Abstraction? (4, Interesting)

CyberLord Seven (525173) | about 7 years ago | (#20929609)

Danger! Will Robinson! Linux boxen tend to be used far longer than Windoze boxen.

Purely anecdotal, but I see a LOT of Linux boxen that are very old running not-so-old Linux kernels.

This means, over a period of time, you have a greater chance of creating a NEW Linux only legacy support issue with newer kernels running on old machines.

This should not stop progress, but it is something that should be recognized up front.

Re:What about Abstraction? (2, Interesting)

phantomlord (38815) | about 7 years ago | (#20930023)

My nntp/webserver:

# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 8
model name : AMD-K6(tm) 3D processor
stepping : 12
cpu MHz : 451.021

# cat /proc/version
Linux version 2.6.22-gentoo-r6 (root@hell) (gcc version 4.1.2 (Gentoo 4.1.2)) #1 Fri Sep 14 15:07:46 EDT 2007
IIRC, I bought the processor and motherboard in May 1999, probably around the same time I finally registered my /. account.

Re:What about Abstraction? (1)

Intron (870560) | about 7 years ago | (#20929665)

So if you know they will be running Windows, it should be OK to prevent loading any other OS? WRONG

Re:What about Abstraction? (1)

MBGMorden (803437) | about 7 years ago | (#20929813)

If it's my computer then damn straight.

See, I've got a Linux machine. I've got 2 Macs. I've got 3 Windows machines (the one on my desktop, the laptop I use to play movies on my TV, and the laptop that I actually use as a portable), and a hacked X-box - yeah, I'm a geek.

That Windows desktop machine? It's gonna run Windows. FOREVER, until I throw that bad boy in the trash it's gonna be a Windows machine because I have other OS's on other machines. And no, I don't really care if whoever pulls it out of the trash dump can get Linux to run on it.

So, even though I only reboot when the power goes out (my Windows machine generally stays up just as long as my others), which means once every few months, if I could shave 10-15 seconds off of the boot time by making that a Windows-only BIOS, I certainly would. Same with the Linux machine. The Macs already don't really use a traditional BIOS, so they don't count.

Re:What about Abstraction? (1)

j35ter (895427) | about 7 years ago | (#20930131)

Uhh, aside from setting up memory parameters and preparing the bootloader, I don't really see any real need for a complicated BIOS... except for the (mediocre) overclockers who get a hard-on when they see a fat BIOS.

Re:What about Abstraction? (4, Informative)

billcopc (196330) | about 7 years ago | (#20930477)

You, like many others before you, are confusing BIOS with what was once called "CMOS Setup".

The BIOS is essentially a set of low-level device drivers for the motherboard and basic peripherals (keyboard, display). Overclockers don't care about it, as long as it works.

The "CMOS Setup", or more appropriately System Setup, is an interface for configuring the motherboard's features. The fancier ones offer many tweaking options; some even include a minimal Linux OS, like the Asus P5K3 Deluxe (extremely handy for pre-boot stuff - or web/media browsing). Overclockers love a big, feature-rich control panel on their board, as it allows them to tweak their system to further heights and offers added functionality like built-in flashing (from a USB key or hard drive) and "smart" overclocking, which is like the opposite of Intel SpeedStep :)

Re:What about Abstraction? (4, Insightful)

krog (25663) | about 7 years ago | (#20929341)

Modern OSes don't trust what the BIOS tells them, due to older BIOSes that can't be trusted. With this fact in mind, you can imagine how getting the BIOS mostly out of the way can gain a few seconds at boot time without losing anything practical.

Re:What about Abstraction? (2, Insightful)

ultranova (717540) | about 7 years ago | (#20929343)

Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)?

Why? Does any OS actually use the BIOS for anything except booting anymore? AFAIR even most DOS programs bypassed the BIOS screen routines (which is why redirection didn't work so well on DOS) and talked to the hardware directly. And I'm certain Linux doesn't use the BIOS for hard disk access, since Linux can use the whole disk even if the BIOS is limited to the first 120MB or so of it on some really old machines.

Re:What about Abstraction? (3, Interesting)

BlowHole666 (1152399) | about 7 years ago | (#20929391)

Funny, that is what the OS is supposed to do also. But now they come with stuff built in. Maybe the BIOS should be left alone and the OS fixed to do just what it is supposed to do, without worrying about the rest of the crap.

It does not matter whether I run Linux or Windows; they both start with crap running in the background. A normal user has no clue what is running. Why not, when you install the OS, just ask: "Do you want a firewall? Do you want a server? Do you want to update your system time over the Internet? Do you want to back your computer up every night?" Most systems just install bloatware because they *think* normal users want this stuff. Or that it provides security. Well, write the OS correctly so that you do not specifically need a firewall, or antivirus, or updates every Tuesday.

Sorry, got on a rant. I am just saying: let the BIOS do its job... boot the system. Let the OS do its job: schedule tasks, manage memory, and provide access to hardware.

Re:What about Abstraction? (1)

CodeBuster (516420) | about 7 years ago | (#20930491)

I am just saying Let the Bios do its job...boot the system.

It is still wise (at least in theory) to allow the BIOS to handle some low level hardware issues behind the abstraction barrier, at least until the OS specifically overrides a certain function for direct control (assuming that it is logical to allow such a low level override). The abstraction layer allows both the OS and the BIOS to vary independently without causing changes in each other and that is a good thing. Suppose, for example, that your mainboard wants to use the Acme Computer Inc 56IXPG chipset or something else that is low level and obscure. Shouldn't the BIOS handle the coincidental low-level on board stuff like that instead of the OS doing it directly? I agree that there is a line to be drawn at some point, the OS has to be free to manage hardware at *some* level after all (that is the whole point of the OS), but there is still value in abstraction so that the interface between the hardware and the OS can be "virtualized" to some extent (i.e. separating important hardware concepts like the kind you read about in your computer design textbooks from the sometimes coincidental details of a particular low level implementation).

Depends on Priorities (1)

smcdow (114828) | about 7 years ago | (#20929419)

Generally? What if performance is your goal?

Re:What about Abstraction? (1)

treyTTU (931851) | about 7 years ago | (#20929461)

Besides just abstraction, what about POST? This is the primary time for the computer to decide it is unsafe to boot. POST codes are less useful than they have been in the past, but that doesn't take away from the importance of testing at boot.

Re:What about Abstraction? (0)

Anonymous Coward | about 7 years ago | (#20929757)

Even Windows forgets about the BIOS and does hardware detection all over again.

Re:What about Abstraction? (3, Informative)

Anonymous Coward | about 7 years ago | (#20929993)

Isn't it more important for the BIOS to present an efficient abstraction of certain hardware resources that *any* OS can easily communicate with according to a standard interface than to optimize support, possibly at the expense of flexibility and abstraction, for a single OS (even if that OS is Linux)?

These guys are simply taking advantage of the fact that the BIOS is an unusably bad abstraction. Linux doesn't make BIOS calls, nor does Windows (since before Windows 2000). If you're booting Linux and XP, your BIOS is doing a bunch of slow hardware autodetection, and then passing the baton to your kernel, which ignores that and does its own faster and more reliable hardware detection.

In that sense, if you really want the BIOS abstraction layer, the first step would be to write a reliable one. Putting Linux in there is the logical first step. If you want to hack LinuxBIOS to do the full hardware autodetection, and then hack Linux to trust hardware info from LinuxBIOS, you're welcome to do so (though the benefits are unclear).

We broke this abstraction in Linux for reliability, not performance. If somebody wants to remove some useless old cruft to increase performance for free, I have no problem with that.

Re:What about Abstraction? (1)

empaler (130732) | about 7 years ago | (#20930713)

Linux doesn't make BIOS calls, nor does Windows (since before Windows 2000).
I was about to refute you, but according to MS KB321779 [microsoft.com] you are right on the money. Windows NT 4 (not really that surprising) and Windows 98 (WTF?!) were "Plug and Play Capable". For some reason they're also asserting that WinME is one, but that must be a typo.

Deck chairs on the Titanic (5, Insightful)

BadAnalogyGuy (945258) | about 7 years ago | (#20929151)

The majority of boot time is spent initializing drivers and bringing the system to a usable state. The 3 seconds it takes for the BIOS to init the disk, locate the MBR, load the bootloader, and jump to it is negligible compared to the tedious hardware scanning and initialization done by the OS itself when it is finally loaded by the bootloader.

If you want to speed up the boot sequence, take a look at cutting the number of attached devices down to the bare minimum. Don't start any services during init. Do as little as possible to get the system to its usable state and you'll have minimized the boot time. Unfortunately, technology just doesn't work that way. System requirements (of both a hardware and a software nature) will require that you perform extra initialization at boot time, so any possible gains are already offset by the increased load.

Getting off of x86 may be one way to optimize the boot process, but how many of us really have the wherewithal to make an architecture jump from x86?

Re:Deck chairs on the Titanic (2, Insightful)

Chirs (87576) | about 7 years ago | (#20929383)

I don't know what hardware you have, but it takes a LOT more than 3 seconds for my machine to do its POST, check the floppy drive, check the CDROM, check the SCSI cable, find my hard drives, check the partition tables, and finally start up my bootloader.

Probably more like 30 seconds.

Re:Deck chairs on the Titanic (1)

jabuzz (182671) | about 7 years ago | (#20929457)

30 seconds, that is positively speedy. On my servers it is around 150 seconds before the bootloader even gets a look in. Admittedly a good chunk of that is as it spins up the drives in the RAID array one by one. You can turn it off, but if you turn all the servers on at once (like after a power cut and they automatically restart) there is a nasty spike as the load on the UPS is pushed to just shy of 100%.

Re:Deck chairs on the Titanic (1)

hjf (703092) | about 7 years ago | (#20930081)

What does the HD spin-up time have to do with BIOS loading speed?

Re:Deck chairs on the Titanic (1)

katani (1090285) | about 7 years ago | (#20930375)

Apparently, his BIOS won't boot the operating system until the hard drives have completely spun up.

Re:Deck chairs on the Titanic (2)

sconeu (64226) | about 7 years ago | (#20930575)

Because it's so easy to boot from a hard drive that hasn't spun up yet.

Re:Deck chairs on the Titanic (5, Interesting)

KC1P (907742) | about 7 years ago | (#20929585)

You're absolutely right. It seems like every OS (including Linux) goes through this -- in the early days it boots much faster than the competition, but once people start routinely layering all kinds of junk on it then it starts taking minutes to boot even on super-fast hardware.

What really bugs me is how much of the startup config is done serially. A lot of startup tasks take time, and step N+1 has to wait until step N is finished whether or not it depends on that step. It seems to me that it would be worth the trouble to mechanize startup so that each step is isolated from all the others and knows which previous step it's dependent on and waits for only that step, while everything else cruises ahead in parallel. It'd be a big change from the way things are done now but it'd be worth it. Having my system stop dead for 60 seconds on every boot just because one of the NICs is unplugged (so DHCP isn't answering) is really annoying. Same deal with Apache choking on virtual domains ... one at a time ... if the name server isn't answering. All those "wait X seconds for Y to happen" things can really add up.

Also, Linux isn't the entire universe, and some of us really do use those legacy BIOS features. Backwards compatibility is the *only* reason the PC architecture has survived, so deciding to toss that to the wind now is just stupid. The cost is minimal (it's not like the code is going to change once it's written) and if whipping up a few tables and setting a couple of INT vectors is honestly adding dozens of seconds to the boot time, well that's just programmer incompetence, it's not the architecture's fault. The rest of the older BIOS code doesn't do anything if you don't call it, so this just sounds like an excuse to be lazy.
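
The dependency-tracking idea above can be sketched in a few lines. This is a toy illustration, not how any real init system is implemented; the task names (syslog, network, apache) and their dependencies are made up for the example.

```python
# Minimal sketch of dependency-aware parallel startup: each task waits
# only on the tasks it declares as dependencies, instead of on everything
# that happens to come before it in a serial script.
import threading

class Task:
    def __init__(self, name, deps, action):
        self.name, self.deps, self.action = name, deps, action
        self.done = threading.Event()

def run_all(tasks):
    by_name = {t.name: t for t in tasks}
    def runner(t):
        for dep in t.deps:            # wait only on *declared* dependencies
            by_name[dep].done.wait()
        t.action()
        t.done.set()                  # unblock anything waiting on us
    threads = [threading.Thread(target=runner, args=(t,)) for t in tasks]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

started = []
tasks = [
    Task("syslog", [], lambda: started.append("syslog")),
    Task("network", [], lambda: started.append("network")),
    # apache waits for the network, but not for, say, a slow DHCP probe
    # on an unrelated interface
    Task("apache", ["network"], lambda: started.append("apache")),
]
run_all(tasks)
print(started)
```

Independent tasks run concurrently, so a 60-second DHCP timeout on one interface no longer stalls services that never asked for it.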

Re:Deck chairs on the Titanic (1)

hjf (703092) | about 7 years ago | (#20930133)

What I don't like about Linux is the amount of unnecessary services installed by default. For example, I have an old computer (P133/16MB RAM) with Debian 3.1 on it. Debian demands that I run Sendmail (or Exim, or Postfix). Why can't I live without an MTA? I doubt that regular home users actually send their e-mail through their local MTA; they probably use their ISP's SMTP, which, I think, is the proper way to do it.

Re:Deck chairs on the Titanic (1)

DaleGlass (1068434) | about 7 years ago | (#20930729)

What I don't like about Linux is the amount of unnecessary services installed by default. For example, I have an old computer (P133/16MB RAM), and Debian 3.1 on it. Debian demands that I run Sendmail (or Exim, or Postfix). Why can't I live without a MTA?

Because the standard way to send mail on a Unix box is to hand it to a 'sendmail' program, which is provided by Sendmail (or Exim, or Postfix).

I doubt that regular home users actually send their e-mail through their local MTA, they probably use their ISP's SMTP, which, I think, it's the proper way to do it.

That's because for example cron doesn't know how to talk SMTP. It calls sendmail which does know. With the configuration you mention the way to do it is to run a mail server locally which then relays the mail to your ISP's one.

The advantage of doing it that way is that you don't have to code a full SMTP implementation into every program that wants to send mail, and don't need to bother handling retransmission and every possible error condition, your mail server already knows how to do all that and can do it for you.

The disadvantage is an extra server, but you could run it from inetd, so that it's not really running unless something needs to be sent.
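
For illustration, this is roughly what "calling sendmail" looks like from a program's point of view: build an RFC 822-style message and pipe it into the sendmail binary. The addresses and message are made up, and the actual hand-off line is left commented out so the sketch is safe to run on a box with no MTA.

```python
import subprocess
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)
    return msg

def hand_off(msg, sendmail="/usr/sbin/sendmail"):
    # -t: take recipients from the headers; -oi: don't end input on a lone "."
    # The MTA handles queueing, retries, and relaying to the ISP's server.
    subprocess.run([sendmail, "-t", "-oi"], input=msg.as_bytes(), check=True)

msg = build_message("cron@localhost", "user@localhost",
                    "cron output", "job finished")
# hand_off(msg)   # uncomment on a box with a working local sendmail
```

This is why cron never needs to speak SMTP: it only needs to know how to run one program and write to its stdin.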

Re:Deck chairs on the Titanic (1)

modecx (130548) | about 7 years ago | (#20930139)

FWIW, most init scripts I've seen recently already background DHCP and stuff like that, so they are parallel to a point. So, if DHCP fails, it just doesn't sit there for minutes on end. As far as kernel modules and stuff like that... There's probably good reason they're checked/loaded in serial fashion, but you're right, it would be nice. The closer to instant-on we get, the better.

Re:Deck chairs on the Titanic (4, Informative)

Karellen (104380) | about 7 years ago | (#20930279)

It seems to me that it would be worth the trouble to mechanize startup so that each step is isolated from all the others and knows which previous step it's dependent on and waits for only that step, while everything else cruises ahead in parallel.

We're working on it... [ubuntu.com]

Re:Deck chairs on the Titanic (1)

Cycon (11899) | about 7 years ago | (#20930605)

Having my system stop dead for 60 seconds on every boot just because one of the NICs is unplugged (so DHCP isn't answering) is really annoying.

hell while you're at it, why not start paying attention to whether or not an ethernet cable is even plugged in in the first place?

windows has been able to re-start DHCP automatically if you unplug and plug back in a cable for years and years now, why can't linux?

easily my biggest pet peeve.
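
For what it's worth, Linux does expose the link state; tools like ifplugd watch it for you. As a rough sketch of the idea: the kernel reports carrier through sysfs, so even a shell loop can poll it and kick the DHCP client when the cable comes back. The interface name and the dhclient call are assumptions; adapt them to your distribution.

```shell
#!/bin/sh
# Read a 1/0 carrier state from a sysfs-style file; default to 0 if absent.
carrier() { cat "$1" 2>/dev/null || echo 0; }

watch_link() {
    # Hypothetical loop: poll the carrier file once a second and
    # restart the DHCP client on a 0 -> 1 transition (cable plugged in).
    iface=$1; file=/sys/class/net/$iface/carrier; prev=0
    while :; do
        cur=$(carrier "$file")
        [ "$cur" = 1 ] && [ "$prev" = 0 ] && dhclient "$iface"
        prev=$cur
        sleep 1
    done
}
# watch_link eth0   # uncomment to run against a real interface
```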

Re:Deck chairs on the Titanic (1)

TheVoice900 (467327) | about 7 years ago | (#20930067)

You apparently haven't used any machines with a good number of devices being initialized by the BIOS. Particularly, machines with multiple network interfaces that support PXE booting, RAID and SATA controllers, and assorted other devices can take over a minute just to get past the initial BIOS initialization. I manage several servers that take just as long to get to the bootloader as they do to actually boot the OS. This gets tedious quickly if you're doing work on the OS (think kernel updates and testing) that requires multiple reboots. Not fun. I'm definitely all for trimming some of the fat off the BIOS.

Re:Deck chairs on the Titanic (1)

Karellen (104380) | about 7 years ago | (#20930241)

how many of us really have the wherewithal to make an architecture jump from x86?

Um, anyone running Debian [debian.org] . I recently changed (painlessly) from x86 to x86-64 (AMD64), but I'd be just as happy if the hardware were cheap and easily available to go to Sparc, Alpha, PPC or ARM.

Re:Deck chairs on the Titanic (1)

Lord Ender (156273) | about 7 years ago | (#20930265)

Some people say changing the BIOS around is like rearranging the deck chairs on the Titanic. That's not true; the BIOS isn't sinking. In fact, the BIOS is soaring; if anything, it's like rearranging the deck chairs on the Hindenburg.

Re:Deck chairs on the Titanic (1)

644bd346996 (1012333) | about 7 years ago | (#20930917)

The majority of boot time is spent initializing drivers and bringing the system to a usable state. The 3 seconds it takes for the BIOS to init the disk, locate the MBR, load the bootloader, and jump to it is negligible compared to the tedious hardware scanning and initialization done by the OS itself when it is finally loaded by the bootloader.

Hmmm... You must not have heard of ACPI. Every PC I've used that supports it takes more than 5 seconds to start loading grub, and some take a lot longer because of the timeouts for entering the BIOS configuration utility.

If you want to speed up the boot sequence, take a look at cutting the number of attached devices down to the bare minimum. Don't start any services during init. Do as little as possible to get the system to its usable state and you'll have minimized the boot time. Unfortunately, technology just doesn't work that way. System requirements (of both a hardware and a software nature) will require that you perform extra initialization at boot time, so any possible gains are already offset by the increased load.

We already have bootchart. It works, and can be used to configure a system that spends less time loading services than the BIOS-controlled hardware probing takes.

The neat thing about stuff like LinuxBIOS and flash-based booting is that your system can theoretically send out a DHCP request before the hard drives have spun up. If you have enough flash space, you should also be able to put part of your X server in there, so that the graphics card can be put in the right mode and be ready to display the log-in prompt (or desktop) within a few seconds of when the OS starts getting data from the hard drive.

Getting off of x86 may be one way to optimize the boot process, but how many of us really have the wherewithal to make an architecture jump from x86?

This bit seems a bit odd, since the only x86-specific stuff in this issue is the BIOS itself, and the article is about replacing that. The actual processor architecture isn't a bottleneck.

Why not EFI? (3, Insightful)

Anonymous Coward | about 7 years ago | (#20929175)

Why not use EFI [wikipedia.org] ?

It does what you want and has been in desktop computers (Macs) for over a year now.

Re:Why not EFI? (1)

seebs (15766) | about 7 years ago | (#20929929)

When I originally wrote the article (October 2005), I wasn't aware of EFI. (That article got delayed MUCH longer than usual in the editing process, for reasons beyond anyone's control.)

Re:Why not EFI? (2, Insightful)

DeathPenguin (449875) | about 7 years ago | (#20929967)

EFI is a specification, not an implementation; the core pieces are still controlled (and _never_ opened up) by vendors, and the result is usually still a big wad of real-mode assembly that nobody wants to touch. There is no 100% open-source EFI-compliant BIOS implementation. The specification alone for EFI is over 1,000 pages.

To top it all off, to even begin development on stuff like Tianocore you need to agree to draconian licensing terms such as: "You acknowledge and agree that You will not, directly or indirectly: (i) reverse engineer, decompile, disassemble or otherwise attempt to discover the underlying source code or underlying ideas or algorithms of the Software; (ii) modify, translate, or create derivative works based on the Software; (iii) rent, lease, distribute, sell, resell or assign, or otherwise transfer rights to the Software; or (iv) remove any proprietary notices in the Software." ( https://www.tianocore.org/nonav/servlets/LegalNotices?type=TermsOfService [tianocore.org] ).

So I guess your question is sort of like asking why people don't like to use proprietary drivers, even though there are some out there that work very well. The nice thing about Macs is that Apple seems to have gone out of its way to make EFI invisible to the user. I don't trust that this will happen on most other pieces of hardware. The BIOS belongs out of the way, IMHO.

Treacherous Platform Module (1)

z0M6 (1103593) | about 7 years ago | (#20930093)

EFI does a whole lot more than what I want it to do. A whole lot more than it should do as well. Kind of the same reason I don't want anything to do with things that have a TPM.

Now if only we could have larger bios chips, then having a linux kernel on one makes sense.

Re:Why not EFI? (1)

rickb928 (945187) | about 7 years ago | (#20930203)

I've dealt with EFI on Itanium systems. Fast isn't the adjective I would associate with EFI.

In fact, until the damned OS boots, fast doesn't really go with Itanium. After the boot, yeah, sure.

Re:Why not EFI? (3, Insightful)

segedunum (883035) | about 7 years ago | (#20930223)

Because EFI is very much proprietary, and the subject of this article is Linux and Open BIOS.

EFI is also pretty broken. It tries to look better than BIOS, but really it isn't. Think of ACPI (Intel brain damage, as Linus Torvalds calls it), which looked good and looked like we'd get some standard interfaces... and we didn't, because hardware was too complex, it had quirks, and everybody ended up doing variations on a different theme. EFI is the same, because of course, everybody's intellectual property has to be protected. I mean, we can't just have manufacturers downloading, installing and contributing to a standard Linux or OpenBIOS, because that would be too easy, it would make things work far too well and everyone would have wonderful boot times ;-). Maybe a motherboard manufacturer will bite the bullet and implement Linux or OpenBIOS when they realise how much better it will make their hardware, and how much cheaper it is without umpteen updates.

EFI is also an awful lot more complex than BIOS, which adds to the list of things to go wrong in terms of different implementations. At least the BIOS we have today is a boot loader - and it doesn't really pretend to be anything else (hell, you'd be crazy to try anything else with it!). Now think about how many BIOS updates we have for various boards today to fix lots of broken things, and then extrapolate that out... It's not a pretty picture.

Re:Why not EFI? (1)

segedunum (883035) | about 7 years ago | (#20930289)

Additionally, I would add that because EFI is supposedly defining interfaces (if only it were that simple), Intel has crazy ideas of implementing drivers and shit - in EFI and the hardware! Just think of the bloody hassle we have today with drivers and hardware. Intel is crazy when it comes to these things.

Re:Why not EFI? (2, Informative)

TemporalBeing (803363) | about 7 years ago | (#20930977)

Why not use EFI?
Because it and UEFI - along with their cousin from Phoenix, which now seems to be defunct (I can't find a reference to it) - are part of the Trusted Computing [wikipedia.org]/Palladium nightmare. If you want TPM to lock you out of your computer or tell you how to use your computer, then so be it.

I choose freedom.


I don't know (0)

Anonymous Coward | about 7 years ago | (#20929207)

It seems like the bulk of my boot time is not spent waiting for the BIOS but for the operating system. In the case of Windows, it's waiting for all of the (mostly unnecessary) background processes to launch. Removing legacy support in the BIOS may help a little, but I seriously doubt that it will significantly decrease boot time overall.

I wouldn't touch this! (4, Insightful)

schnikies79 (788746) | about 7 years ago | (#20929225)

As the subject states, I wouldn't touch this unless it was an official release from my board manufacturer. With a bad install or software bug, I can just re-install, but a bad BIOS can hose the motherboard. I might try it if someone had it running on the exact same hardware, down to the part #'s for the RAM.

I'm admittedly not terribly bleeding-edge when it comes to hardware or electronics, but mucking with my BIOS is a no-no.

Re:I wouldn't touch this! (1)

miscz (888242) | about 7 years ago | (#20929685)

I wouldn't do that if my motherboard wasn't fully supported, but many modern motherboards have a backup BIOS that can be loaded with the proper jumper setting.

Re:I wouldn't touch this! (2, Informative)

DeathPenguin (449875) | about 7 years ago | (#20929827)

>>I might try it if someone had it running on the exact same hardware, down to part #'s for the ram.

Fortunately, you don't need exact matching hardware to recover from a botched BIOS update if you have a socketed BIOS chip. The flash memory your BIOS is stored on can be easily removed, placed in someone else's computer with a compatible socket (It can be a whole different architecture, even), and reprogrammed with the vendor's BIOS using Linux+Windows compatible utilities such as Flashrom ( http://linuxbios.org/Flashrom [linuxbios.org] ), vendor-provided flash utilities which usually run in DOS, or through Linux MTD. There are even services set up to do it for you ( http://www.badflash.com/ [badflash.com] ) if you don't have access to another mainboard with a compatible socket.

Unfortunately, BIOS upgrades can become necessary after purchasing a machine if only to support more advanced CPUs (Remember the transition to dual-core CPUs?), to get power management right, etc. The lesson: If you're worried about BIOS updates, buy a motherboard with a socketed BIOS.
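
A hedged sketch of what a backup/reflash session with flashrom looks like (the `-p internal`, `-r`, `-w`, and `-v` flags are real; the file names are made up). The commands are echoed rather than executed here, since writing flash on the wrong board is exactly how you end up needing that second socket.

```shell
#!/bin/sh
# Sketch of a BIOS backup/reflash session with flashrom.
# RUN=echo keeps this a dry run; clear RUN only on a board you can recover.
RUN=echo

backup_bios() { $RUN flashrom -p internal -r backup.rom; }        # read current image
flash_bios()  { $RUN flashrom -p internal -w vendor-bios.rom; }   # write new image
verify_bios() { $RUN flashrom -p internal -v vendor-bios.rom; }   # verify against file

backup_bios
flash_bios
verify_bios
```

Always take the backup first: that file is what you reprogram the chip with from another machine if the write goes wrong.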

Re:I wouldn't touch this! (1)

schnikies79 (788746) | about 7 years ago | (#20930735)

I have a desktop that I built, but I primarily use a notebook. I've yet to see a modern notebook with a socketed BIOS or even jumpered.

Disk-on-Chip Linux (4, Interesting)

RancidPickle (160946) | about 7 years ago | (#20929291)

If they could come up with a dedicated Linux Bios combined with a Disk-on-Chip setup, it would make an impressive little computer. Fast-on, perhaps with a drive or removable flash drive, and all updatable. It certainly could make an inexpensive box, and could be an ideal homework machine for the kids or a combo stand-alone box / terminal for offices. If the network went down, people could still work.

Re:Disk-on-Chip Linux (1)

TooMuchToDo (882796) | about 7 years ago | (#20929673)

And tying that together with Google(TM) Applications (and I do mean all of them: Search, Gmail, Apps, etc) means you would be able to run a thin client for a majority of day to day tasks.

Re:Disk-on-Chip Linux (1)

prencher (971087) | about 7 years ago | (#20929935)

Isn't this pretty much exactly what the Asus Eee PC [wikipedia.org] is?

Not needed. (2, Interesting)

WindBourne (631190) | about 7 years ago | (#20930897)

My home server boots from a SanDisk 4G CF drive. Speed-wise, it is blazing. The mounts are a bit different: / is on the drive, while /home, /opt, and parts of /var (such as /var/log) are on the HDD. Roughly, any directory that varies is put on the HDD. Next year, I will buy another CF, only it will be 8G. By then, the price will be much lower and the speeds increased.
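
A layout like that might look something like this in /etc/fstab (the device names, filesystems, and mount options below are illustrative assumptions, not the poster's actual config):

```
# /etc/fstab sketch: root on CF, frequently-written trees on spinning disk
/dev/sda1   /          ext2   noatime    0  1   # 4G CF card; noatime cuts writes
/dev/sdb1   /home      ext3   defaults   0  2   # HDD
/dev/sdb2   /opt       ext3   defaults   0  2
/dev/sdb3   /var/log   ext3   defaults   0  2   # anything that varies goes here
```

Keeping the write-heavy directories off the flash avoids wearing it out while still getting the fast, silent root filesystem.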

Benefits from experience (5, Informative)

vil3nr0b (930195) | about 7 years ago | (#20929305)

I have repaired clusters for the last two years and most have OpenBIOS. Highlights:

1) Fast as hell!!
2) Easy to change options
3) Can mount the file to a disk, edit, and then replace
4) Errors can be determined by watching the console, no video needed. One serial cable, one laptop = priceless
5) Free

/. news that mattered (0)

Anonymous Coward | about 7 years ago | (#20929387)

Not only is this rehashed from Digg and Reddit, the article itself is from 31 Aug 2006. :(

Open source and documentation (1)

Tim Ward (514198) | about 7 years ago | (#20929573)

From TFA:

Of course, the exact meanings of these codes vary from one BIOS to another, and only some vendors document them. Luckily, the open source vendors are very good about documenting them.

Wow!! - Who are these open source people who are "very good about documenting" beep codes?? Any chance of deploying them to all the other open source projects out there?? - they could sure do with the help!

Open BIOS is Mission Critical. (4, Insightful)

asphaltjesus (978804) | about 7 years ago | (#20929603)

Why? Well, Trusted Platform Computing needs to start on the BIOS level in order to maintain a trusted environment. If motherboard manufacturers actually move to an always-on TPM, then OSS developers may be locked out of newer hardware.

The mobo manufacturers will love the price compared to a commercial TPM, thereby limiting TPM deployment.

That's why getting involved with these projects in particular is essential to everyone who understands the importance of computing Freedom and overall innovation.

To save time: (4, Funny)

seebs (15766) | about 7 years ago | (#20929607)

No, I don't know that much about what's happened in the field in the year and a month or so since this article went up, a month or so after I wrote it. I've been busy.

Flash Hibernate (2, Interesting)

Doc Ruby (173196) | about 7 years ago | (#20929841)

I want my PC to hibernate to flash, storing an image that requires only the slightest update to reflect network state, time, and a few other counters. And all apps to store their state so they can be "rebooted" to flush memory leaks, but return to their highlevel state.

That would give instant-on that's great for mobiles, but also good for desktops. Why is that so hard? Isn't hibernating to flash with a little update a lot easier than rewriting the BIOS?

why the PC is so slow to boot (4, Interesting)

Skapare (16644) | about 7 years ago | (#20930169)

One major reason a PC is so slow to boot is the totally free-wheeling nature of attached devices. There's actually too much liberty to do bad things in device hardware. In some cases, probes to see if a certain specific device is present can cause some other device to go into a locked up state. PCs also have the complication that interrupts don't really identify the device in the same terms as how you access the device. This means we have to do things like timed waits in device probes. Ideally we should be able to discover all the devices in a computer within a millisecond for as many as 100 devices.

We need a whole new system-level (as opposed to CPU-level) architecture. We need a uniform device address range for all devices, and a uniform set of basic commands for all devices. Then all devices in the same class (storage devices are one class, network interfaces are another, etc.) should share a common set of commands to operate the normally expected functions of that device class.

And we really don't need a BIOS, or at least not much of one. A simple switch that lets us select between 2 flash areas to load at reset or power on would handle almost all cases. And even that's not necessary if we choose to run a stripped down boot selector program from flash that lets us select other flash areas to load. That combined with a hardware based "JTAG over USB" protocol to store new flash images when no present ones work (maybe when an on-mainboard or rear-access switch enables it) would provide any needed recovery capability.

And why can't we have gigabytes of flash? I bought a 2GB SD card the other day for $20. Can't they put that on the mainboard? An SD slot would not only provide for a lot of capacity (way more than what you get on a CDROM), but also a means to stop writing, and a means to swap out bad flash or reload it in another computer.

I have been working on a description document for a new architecture. It's not ready, yet, or I would post it here. But I'll try to speed it up.

Speed booting??? (4, Funny)

gstoddart (321705) | about 7 years ago | (#20930357)

Speed boot: (noun) What we water ski behind in Canada.

Thanks, I'm here all week. Try the veal. :-P

unneeded (0, Flamebait)

lukesky321 (1092369) | about 7 years ago | (#20930359)

I'll probably never use this. My computer (Linux) has a very good uptime, so it is very rare that I'll reboot my machine this year. Even if there is a power failure, my UPS will kick in.