
Comments


New Phishing Toolkit Uses Whitelisting To 'Bounce' Non-Victims

rdebath Re:Need better security (71 comments)

As far as I can tell, the OTP calculators are only issued for business accounts; normal "end user" accounts have minimal provisions. One example uses a user ID and a password (split into two entry fields), and the site displays a picture that you chose when you first activated "web access".

This isn't that secure, and because a lot of their site is plain HTTP there's a good chance that a "sheep" attack would work too.

about a year and a half ago

New Phishing Toolkit Uses Whitelisting To 'Bounce' Non-Victims

rdebath Need better security (71 comments)

It looks like banks and government departments can no longer be trusted to operate as normal web sites. They have to be set up to be available only through SSL, and must use client certificates for authentication, with some way of verifying that the server certificate matches the client certificate.

Only then could the software (possibly a custom configuration of a web browser, maybe a normal one) actually be sure of defeating a phishing attack.

Of course the main reason it'd work is that with a client certificate there's no password to "phish" for.
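
As a rough sketch of how that could look in practice (all file names and the bank URL below are invented for illustration, not anything a real bank publishes), the bank would sign a client certificate for the customer and the client would present it on every connection:

    # Customer (or bank) generates a key pair and certificate request -- hypothetical names.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout client.key -out client.csr -subj "/CN=customer-12345"

    # The bank's CA signs it; only certificates signed by this CA can log in.
    openssl x509 -req -in client.csr -CA bank-ca.crt -CAkey bank-ca.key \
        -CAcreateserial -out client.crt -days 365

    # Every request presents the certificate and verifies the server against the bank's CA,
    # so there is no password for a phishing site to capture.
    curl --cert client.crt --key client.key --cacert bank-ca.crt \
        https://onlinebanking.example/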

Something tells me that the banks are too lazy to do this; every other web site will have to be SSL before they get on the bandwagon.

about a year and a half ago

Should Microsoft Switch To WebKit?

rdebath Re:Arguments of convenience (244 comments)

I didn't say there weren't some disadvantages. As I understand it, the Perl thing is something of a documentation project, so they explicitly specify which responses are expected and which are artefacts of the implementation that the programmers weren't expecting. An example in C:

x = 1;
x = x++ + x++;      /* undefined behaviour: x is modified twice with no intervening sequence point */
printf("%d\n", x);

... What's the value of "x"? Your implementation will give you one number; a different implementation may give a different number. The standard says both implementations are right, because the code is outside the standard (the behaviour is undefined) even though the compilers accept it.

Once you have the documentation (the standard) there may be better ways of coding a program that does what the standard says (but not necessarily exactly the same as the first implementation). For example, in the Perl case: how about a compiler rather than an interpreter; would it run faster? With the standard in place you have a fixed target and (in theory) a test suite to check your implementation against the standard. Without the standard you only have the first implementation to compare against; with any significant program you will find differences, but are they significant differences? There's no way to tell.

Unfortunately, the web standards are so weakly specified that they don't really provide this advantage. Browsers don't have to throw any errors, they don't have to have "validation modes", and there's no way for an older version to identify code that will only work in a later version. So the result is that most web pages are "outside the standard", which means the browsers can do what they want with them. Hopefully this is better with HTML5, though.

about a year and a half ago

Should Microsoft Switch To WebKit?

rdebath Re:Arguments of convenience (244 comments)

Not "multiple choices", he said "ALL choices".

That includes the super OS that's going to be released next year, the funny phone with the top-secret library APIs, and Win32s on WfW 3.1 (if someone cares to spend some time).

about a year and a half ago

Should Microsoft Switch To WebKit?

rdebath Re:Arguments of convenience (244 comments)

As I remember it, Microsoft jumped the gun on the standard with IE6 and guessed which way the (ambiguous) draft standard should be interpreted. The committee went the other way.

After the release of IE6 there was some political stuff that basically meant Netscape was never released again. Without the competition Microsoft didn't care, and so IE6 froze too. Only when Mozilla actually threatened to gain a larger market share than IE6 did Microsoft start updates again.

All through this the standards committee were still working (mostly without Microsoft's input) in their normal (slow) fashion.

The 'pissing contest' method is actually not a bad way of showing your users what you think they're asking for; normally it's called 'prototyping'. The important thing is to have a solid line between the parts that are 'stable' and the parts that are 'prototype'; in this case that line is the "-webkit-" and similar prefixes. For IE6 there wasn't one.

about a year and a half ago

Should Microsoft Switch To WebKit?

rdebath Re:Arguments of convenience (244 comments)

I would strongly disagree with this.

Having a standards committee design the next step in a technical advance is one of the worst possible ways of working. What you usually end up with is a huge conglomeration of random ideas and special interests. For programming the result is frequently described as "feeping creaturitis".

The reason for web standards is not technical; standards don't help make better mousetraps. They exist so that a hundred mice can wrestle the cat into submission, so that the little guys can make stuff too and don't get forced out of the market by a brute who can throw either money or lawyers around to kill off the competition.

If WebKit became "the web browser" this would be no different from (for example) the single source of the Perl language. There wouldn't be the problem of the secret Trident, where nobody can compete or the technology can be politically leveraged to force the use of other software (e.g. an OS). Because it is freely forkable, if the current maintainers don't support an environment, patches against the source can be added by others. What's more, if the maintainers make enough of a fuckup they can be forced out completely.

But there is a problem for Microsoft: several years ago they claimed that IE was an essential component of their OS, and they very hurriedly tried to make sure that this wasn't a complete lie. Because of this, lots of parts of the OS now use DLLs and libraries from IE to do simple jobs, or use IE as a local display processor. The result is that Microsoft would have a difficult job removing IE and its HTML engine; so much so that it's probably easier for them to fix IE than to navigate the maze of interdepartmental politics that would be involved in removing it.

about a year and a half ago

Death of Printed Books May Have Been Exaggerated

rdebath Re:Nostalgia (465 comments)

I think you're looking at this from the wrong side.
It's not the size of a book that's fixed, it's the size of the ereader that's fixed.

If you have a thousand random books, a large percentage will be 'paperback sized' and a few will be 'oversized paperbacks', those ones that are always a pain on the shelves. But the rest are random sizes, anything from 'C' size to huge. For books the shelves are a problem, but there's not much downside to having a few different shelves for the wrong-sized books.

For an ereader you only have one screen size. As you've noticed, it's usually about paperback sized because that's convenient for books that are just words: no pictures, no tables of numbers, just 5-12 words per line like paperbacks or newspapers. But as soon as a book starts adding pictures it forces a minimum page size, and if that's larger than the screen you have a problem.

The answer is obvious: larger screens for the larger pictures ... but that makes the 'ereader' too big for a pocket, too expensive, and not really an ereader any more.

Oddly enough, the writers of the original Star Trek series noticed this problem, in that there have always been at least two forms of the "PADD" or "hall pass": the handheld style and the clipboard style.

Looks like someone needs to make a "big screen" version of their ereader, as identical as possible to the "little screen" one (and so sharing the development budget) but sized around A4. Or perhaps an A4 screen that you can attach the ereader to the back of.

about a year and a half ago

Death of Printed Books May Have Been Exaggerated

rdebath Re:Information density (465 comments)

Sticking fingers in as bookmarks ... okay, that sounds more like having the book "open" and "on the screen" more than once. That's not something that works with the current flock of ereaders, but it does work with ereader software on a large screen PC.

I suppose it comes down to 'rapidly': that flash and redraw of the "e-ink" style screens is slow. The processors may be fast enough, but the software they're running is written for machines with ten times the clock rate or ten times the memory, so the software is slow.

Sigh, I think my next ereader better run Linux or Android. At least then I may have a chance for it to get fixed.

PS: Searching is nice; bookmarking searches is even nicer.

about a year and a half ago

Death of Printed Books May Have Been Exaggerated

rdebath Re:Information density (465 comments)

I would disagree with your downsides.

Tactile nature
I'd say this is an upside for the reader. I can put the book down without being forced to find a bookmark. My ebook reader is thinner than a paperback, so it actually fits in pockets and so forth.
Bookmarks
The reader does bookmarks; it seems to have an unlimited number and they're easier to label. You can even have bookmarks outside the book, e.g. 'hyperlinks between books'.
Flicking through
You can hit 'next page' pretty quickly, and jump to 24% through the book or 30 pages back very easily. Of course you don't get the odd rippling sound ...

IMO, the major downside of an ebook reader is that it has ONE page size. So any document that forces a page size or a page width has a problem. This obviously includes 'PDF' type documents, but it also includes HTML documents that use features later than about HTML 3, and even text documents that assume the screen is 80 columns (or worse, "800 pixels of Arial 10 point") wide.

Obviously this can be worked around if the reader's page size is "large" (e.g. roughly A4 sized), which is what happened with HTML4+ on a computer screen, but that kinda defeats the idea of an ebook reader.

This leads to the second downside: unless you're very careful, your ebooks will die when the reader does, either because a format change makes them unreadable or (worse yet) because DRM kills them.

about a year and a half ago

Death of Printed Books May Have Been Exaggerated

rdebath Re:Love my kindle and my Nexus 7 (465 comments)

All you really have to realise is that the DRM thing is a con.

Those people claiming that DRM software can stop anyone getting a non-DRM copy are wrong. DRM can do two things:

  1. Make it more difficult for a "legitimate" user to get at the data.
  2. Prevent EVERYONE, including "legitimate" users, from accessing the data.

Of the people that DRM is supposed to stop, only one has to do a little work. All the rest then have an easier time than any "legitimate" user.

DRM on Ebooks is actually one of the easiest to break; the data rate is so low compared to audio or video that the "analog hole" is a very reasonable way of un-DRMing the data.

So the correct solution for you as a user is to buy a "DRM copy" so that you're a "legitimate" user and then download the "pirate" non-DRM version to actually use. Please don't forget the first step ... um.

about a year and a half ago

'Hobbit' Creates Big Data Challenge

rdebath Re:Frame rate shouldn't matter (245 comments)

Check out fractal compression: with MPEG compression, increasing the frame rate and resolution increases the size of the compressed video, but a 'fractal' compressed stream is resolution and frame rate independent.

Unfortunately, the algorithms were patented in the US and the patent holder made the licensing terms too onerous. So very few compressors and decompressors were written in software, and (unlike MPEG) hardware-assisted encoding and decoding never happened.

As this work was done in the late eighties and early nineties, the patents are expiring, so hardware encoding may now become cost effective.

about a year and a half ago

Ask Slashdot: Do You Test Your New Hard Drives?

rdebath Re:Badblocks/Shred (348 comments)

If you do FDE (full-disk encryption) you don't want to use badblocks --random; it creates ONE random block and writes it out repeatedly.

I find one of these is better..

  • Testing the disk: four writes and four reads.

    testdisk () {
        [ -e "$1" ] || { < "$1" ; return; }       # complain and bail out if the device doesn't exist
        hdparm -f "$1" 2>/dev/null ||:            # flush the drive cache; ignore failures
        cryptsetup create towipe "$1" -c aes-xts-plain -d /dev/urandom   # throwaway cipher mapping with a random key
        badblocks -svw /dev/mapper/towipe         # destructive test: four write and four read passes through the cipher
        cryptsetup remove towipe
        dd bs=512 count=1 if=/dev/zero of="$1"    # clear the first sector afterwards
    }

  • Fast wipe with true random data; usually runs at full disk speed.

    wipedisk () {
        [ -e "$1" ] || { < "$1" ; return; }       # complain and bail out if the device doesn't exist
        hdparm -f "$1" 2>/dev/null ||:            # flush the drive cache; ignore failures
        dd bs=512 count=100 if=/dev/zero of="$1"  # zero the start of the disk (partition table etc.)
        cryptsetup create towipe "$1" --offset 1 -c aes-xts-plain -d /dev/urandom
        dd bs=1024k if=/dev/zero of=/dev/mapper/towipe   # zeros written through the cipher come out as random data
        cryptsetup remove towipe
    }

  • Alternate full-speed random wipe; sometimes faster.

    wipedisk() {
        [ -e "$1" ] || { < "$1" ; return; }       # complain and bail out if the device doesn't exist
        hdparm -f "$1" 2>/dev/null ||:            # flush the drive cache; ignore failures
        openssl enc -bf-cbc -nosalt -nopad \
            -pass "pass:`head -16c /dev/urandom | od -t x1`" \
            -in /dev/zero | dd bs=1024k > "$1"    # Blowfish-encrypt an endless stream of zeros with a random key
        dd bs=512 count=1 if=/dev/zero of="$1" 2>/dev/null   # then clear the first sector
    }

The end result is a drive filled with true cryptographically random data, completely indistinguishable from an encrypted drive, because it is an encrypted drive!

about a year ago

How the Internet Became a Closed Shop

rdebath Re:Related Anil Dash Blogs and earlier /. discussi (206 comments)

The VM you're describing IS Java, or Silverlight (ie: MSJava), or Flash.

The problem always seems to go back to deep linking and scraping. So what if your VM runs wonderfully and displays everything perfectly to the user on a quad-core processor with a dual-slot GPU? If the search engine can't work out where you should be in a search list, you'll never get any visitors. And search engines are dumb, small and dumb, with no GPU either. Then, if you have only one 'link' to your site, even if the search engine were able to index everything you'd get a vague list of things that are sort of near the URL the search engine can give. And even if you have a person creating a reference, without a deep link direct to the interesting bit, nobody would bother.

HTML is an ugly, overstressed framework, JavaScript is brutalised by the libraries, and CSS is just crap. But even if the combined language were made perfect it wouldn't last. The current web is the bastard crossbreed that's needed to serve conflicting masters, and the masters would still be there trying to rip your 'language' apart, to make it perfect for just one tiny slice of the problem.

I don't know what the solution is; hopefully HTML5 will help more than it hurts.

about 2 years ago

Microsoft Complains That WebKit Breaks Web Standards

rdebath They're all wrong! (373 comments)

border-radius: 15px;
-moz-border-radius: 15px;
-ms-border-radius: 15px;
-o-border-radius: 15px;
-webkit-border-radius: 15px;

Look at that mess! It's not what the web developer is trying to say. This is more like what they want to say ...

border-radius: 15px;
-w3cnext-border-radius: 15px;

The web developer wants to use the expected behaviour of the next CSS standard. This prefix says that; or perhaps a "-css4beta-" prefix, so we don't get caught out by css5beta. IMO the web browsers should be saying what they are trying to provide, not just "this isn't the current version, it's mine".

I am NOT saying that the "-vendor-" prefixes should go away, just that once it becomes pretty much certain that a particular change will be in the next standard, it goes into the extra prefix. That prefix becomes what would be in the standard if it were ratified tomorrow.

At that point nobody cares how slow the W3C is.

about 2 years ago

Ask Slashdot: Best 32-Bit Windows System In 2012?

rdebath Use mixed Linux + Windows mode. (313 comments)

Currently I would not recommend installing an old 32-bit Windows on new hardware. Reasons include (1) a complete lack of drivers for some hardware and (2) hardware (e.g. AHCI) that has to work in slow compatibility modes.

Assuming your application runs on it, I would suggest Windows 2000 in a VM, with the guest given about 2GB of private memory (or just under). This is because (1) Windows 2000 is still very light on modern (or nearly modern) hardware, (2) its compatibility is very good with both the 9x versions and the later XP and W7, so most (non-Microsoft) programs will run, (3) it has a reasonable DOS box and good 16-bit support, and (4) using the "270" hack you are not going to have any problems with license keys or activation servers going offline.

For the VM host I would suggest a 64-bit Linux using KVM as the virtual environment. I would NOT recommend a 32-bit version of the Linux kernel, because the caching will not be able to use all of the memory (despite it being available to applications) and the VM guest will be limited to 2GB with no choices. This is not the simplest of virtual hosts to work with, but it has very good performance and very wide hardware support. In addition, with the correct choice of distribution it will be a very light host in terms of disk and memory overheads.
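
As an illustration only (disk image name, memory size and installer ISO are invented for the example), a minimal QEMU/KVM invocation for such a guest could look something like this:

    # Create a disk image and boot the Windows 2000 installer (example file names).
    qemu-img create -f qcow2 win2k.img 20G
    qemu-system-i386 -enable-kvm -m 2000 \
        -hda win2k.img -cdrom win2k-install.iso -boot d \
        -vga std -net nic,model=rtl8139 -net user

The rtl8139 NIC model is picked here only because old Windows versions commonly have drivers for it; adjust to taste.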

OTOH, if you just want something simple, use Windows 7's XP Mode (or perhaps Win2k in Virtual PC).

about 2 years ago

NASA DTN Protocol: How Interplanetary Internet Works

rdebath Usenet to the planets. (109 comments)

Forget all this talk of UUCP, Fido and normal packet protocols; the closest current analogue is sending binaries over Usenet.

The most important part is the delay: when you 'launch' a Usenet message you won't receive anything at all from the remote end for a very long time, probably long enough for you to transmit the entire message and then some.

The medium also has some limitations ...

  • You can't send a 'message' over a few (hundred?) kilobytes; still small, but a lot larger than a single packet.
  • The medium is unreliable; messages will get corrupted or lost.

For Usenet the binary files are packaged up into one archive, then split into messages. Usually nothing is considered received until the entire archive has arrived intact. It used to be that the receiving end would request repeats of messages that didn't get through; this takes a long time and wasn't simple to automate because of the multiple-receiver nature of Usenet. Nowadays extra messages are added using the 'parchive' format, the idea being that the extra messages are 'universal substitutes'. Say the transmitter needs to send out an archive of 1000 messages, and it's likely that 4%-9% of messages will be lost; then adding 100 extra PAR messages will (normally) mean that the archive gets there intact on the first try, with no retransmission request needed.
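
A rough sketch of the same idea using the par2 command-line tool (file names invented for the example): the sender adds about 10% recovery data and the receiver rebuilds from whatever subset arrives intact.

    # Sender: split the archive into parts and create ~10% recovery blocks.
    split -b 500k -d archive.tar part.
    par2 create -r10 archive.par2 part.*

    # Receiver: with some parts missing or corrupted, verify and repair from the PAR2 data.
    par2 verify archive.par2
    par2 repair archive.par2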

I expect 'bp' is very similar.

about 2 years ago

Ask Slashdot: Securing a Windows Laptop, For the Windows Newbie?

rdebath Re:Windows for Linux users, advice (503 comments)

Some minor notes here...

1. Windows 7 on a new laptop.

IMO a new laptop is not essential, BUT it must be 'Windows Logo' certified for Vista or later, otherwise Windows 7 will use a rubbish unaccelerated framebuffer video driver.
Also, I would make sure you use the 64-bit version of Windows; it's a slightly more hostile environment for malware.

3. Create a regular user account ...

This is a good idea, but treat it as 'best practice' and give him both passwords. After all, we have here a 12-year-old with some skill at Linux. He has physical access to the machine, so he already has higher access than a Windows Administrator; if all else fails he can take a screwdriver and move the hard disk to another machine.

5. Back up the machine ...

Lots of tools for this. One I like is http://www.drivesnapshot.de/en/index.htm. It has a Linux restore option, so you only have to do a PXE Linux boot and restore the image from the network. In addition it does differential disk-image backups, something most image-backup makers claim is impossible, all using VSS from the running Windows installation, and you can initially store the backup files on the same disk you're backing up. (But don't forget to clone the boot partition too.)

But if I'm only doing a one-off backup (day zero) I'll use the Linux tool "ntfsclone" (from ntfsprogs). For Windows 7 you need to copy both partitions and dd(1) the first megabyte of the hard disk to a file.
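
A minimal sketch of that day-zero backup, assuming the usual Windows 7 layout of a small "System Reserved" partition plus the main system partition (device and file names here are examples only):

    # From a Linux live/PXE boot, with the Windows disk appearing as /dev/sda (example device).
    dd if=/dev/sda of=sda-first-1MB.bin bs=1M count=1                # MBR, partition table, boot code
    ntfsclone --save-image -o sda1-system-reserved.img /dev/sda1     # small boot partition
    ntfsclone --save-image -o sda2-windows.img /dev/sda2             # main Windows partition

    # Restore is the same pieces in reverse:
    #   dd if=sda-first-1MB.bin of=/dev/sda bs=1M count=1
    #   ntfsclone --restore-image --overwrite /dev/sda1 sda1-system-reserved.img
    #   ntfsclone --restore-image --overwrite /dev/sda2 sda2-windows.img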

BACKUPS. I really cannot say this often enough: you will have to restore the machine at some point, and you will have to roll the Windows install back to day zero. This is not like Linux, where you can reasonably upgrade the filesystem through 15 years of changes and still have a fast and clean system. There is no package manager. Windows programs depend on install and uninstall scripts, and they are very rarely complete or consistent. They break things, they leave debris behind, and game installers tend to be the worst of the bunch. They not only have "mistakes" in them, they have intentional "anti-piracy measures" and "DRM" which can never leave the system, because that would let you reinstall the game for another 20-day teaser session.

Even that "drive snapshot" program leaves a single registry key behind; insignificant on its own, but some applications leave hundreds, and this machine will have lots of installs and reinstalls. Remember the Microsoft three R's ... Retry, Reboot, Reinstall.

about 2 years ago

Ask Slashdot: What Distros Have You Used, In What Order?

rdebath Very simple... but long... (867 comments)

  • SLS 1.02
  • SLS 1.02 + Manual updates
  • In-place manual upgrade to Debian Bo
  • Debian Hamm
  • Debian Slink
  • Debian Potato
  • Debian Woody
  • Debian Sarge
  • Debian Etch
  • Debian Lenny
  • Debian Squeeze & Ubuntu

All the upgrades have been done on a single filesystem that's been upgraded and transplanted from machine to machine. Some secondary machines have had other copies of Debian and the occasional other distribution (but never for long). The Ubuntu (on a little laptop) is just Debian enough that I don't replace it.

Parts of the home directory started life on a SCO Xenix machine, with honest timestamps back to 1989. A few files are dated before that, but they are generally DOS backups and files that have lost their timestamps for one reason or another.

about 2 years ago

