
NVIDIA Unveils Lineup of GeForce 800M Series Mobile GPUs, Many With Maxwell

Jamie Lokier Re:Linux (83 comments)

Thanks! But too late. That machine died this time last year, after 6 years of excellent service. I moved on to new hardware.

Hopefully the xorg.conf is useful to someone else.

I've just looked up what people are saying about DebugWait, and I see the font corruption - that's just one of the types of corruption I saw!
But perhaps that was the only kind left by the time my laptop died.

Just a note to others: according to reports, DebugWait doesn't fix the font corruption for everyone. However, it's also reported as fixed by the kernel shipped in Ubuntu 13.04.

I stand by my view that Intel GPU support never quite reached "excellent" because of various long-term glitches, although I'd give it a "pretty good" and still recommend Intel GPUs - as long as you avoid the PowerVR-based ones; that surprise wrecked a job I was on, which was very annoying. Judging by the immense number of kernel patches over the years, it has received a lot of support and in most ways has worked well.

Getting slightly back on topic with nVidia: another laptop I've used has an nVidia GPU, and throughout its life it has been much, much worse under Ubuntu than the laptop with the Intel GPU. Some people say nVidia works well for them on Linux, but not on this laptop. I have tried all available drivers - Nouveau, nVidia's, nVidia's newer versions, etc. Nothing works well: Unity3d always chugs along at about 2-3 frames per second when animating anything, which is barely usable; the GPU gets very hot doing the slightest things; and visiting any WebGL page in Firefox instantly crashes X with a segmentation fault somewhere in OpenGL, requiring a power cycle to recover properly. So in my personal experience of Linux on laptops, I'd still rate nVidia poorer than Intel :)

about 6 months ago

NVIDIA Unveils Lineup of GeForce 800M Series Mobile GPUs, Many With Maxwell

Jamie Lokier Re:Linux (83 comments)

Now? Intel GPU support has been excellent under Linux even back when the crusty GMA chips were all we had.

Except for the bugs. I used Linux, including tracking the latest kernels, for over 6 years on my last laptop, which had an Intel 915GM.

Every version of the kernel during that time produced occasional display glitches of one sort or another, such as a line or spray of random pixels every few weeks. Rare, but not bug-free.

And that's just using a terminal window. It couldn't even blit or render text with 100% reliability...

I investigated one of those bugs and it was a genuine bug in the kernel's tracking of cache flushes and command queuing.
In the process I found more bugs than I cared to count in the modesetting code.

Considering the number of people working on the Intel drivers and the time span (6 years) that was really surprising, but that's how it was.

about 6 months ago

FSF's Richard Stallman Calls LLVM a 'Terrible Setback'

Jamie Lokier Re:Precisely (1098 comments)

In addition to what others said about the FSF discouraging the LGPL, you are also not allowed to statically link LGPL code into non-(L)GPL closed code. You can only link dynamically, unless you provide full source.

Nonetheless, statically linking with LGPL libraries in the form of uClibc is _extremely_ common in commercial devices running uClinux. Without providing any way to relink. Forbidden, but ignored.

about 8 months ago

Current Radio Rules Mean Sinclair ZX Spectrum Wouldn't Fly Today

Jamie Lokier Re:How are mobile phones legal then? (64 comments)

As the AC implies, that's not interference from bad or unshielded electronics in the mobile (or it shouldn't be).

An ideal mobile transmits only what it's supposed to, on the correct RF channels to communicate, and nothing else.
Like all devices there will be other emissions, but let's assume it's very well made and effectively perfect.

The sound on the speakers happens because the speaker circuit is effectively an RF receiver, converting those high frequencies to audio. It actually demodulates the signal - unintentionally.

If the speaker circuit is made well enough, it won't do this.

about 2 years ago

Microsoft In Talks To Buy Nokia's Smartphone Division?

Jamie Lokier Re:Apple? (192 comments)

I had exactly the same thing happen (audio stopped playing until reboot, so phone ring was silent) on Nokia Symbian S60 phones years ago.

It's crappy but it's not exclusive to Windows phones.

more than 2 years ago

How Doctors Die

Jamie Lokier Re:The Sanctity of Life (646 comments)

We stagnate unless we choose not to. You don't *have* to become a stereotypical angry old conservative. That's up to you. I choose not to.

more than 2 years ago

Microsoft Issuing Unusual Out-of-Band Security Update

Jamie Lokier It pleases me that Perl isn't listed as vulnerable (156 comments)

Because Perl switched to a better hash function _and_ randomised it ages ago.

Having looked at many different fast hashing functions, I'm amazed at how many in the vulnerability report are still using the ancient multiply-by-small-constant and xor/add approach. That sort of thing tends to need a prime hash table size and a slow 'mod' operation. We have better hash functions now that work with 2^n table sizes.
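To illustrate the contrast, here's a minimal Python sketch - hypothetical, not code from the vulnerability report. It puts an old-style multiply-and-xor hash of the kind being criticised next to seeded 64-bit FNV-1a, which stands in for whatever seeded hash an implementation actually picks (Perl's is a seeded one-at-a-time variant) and mixes well enough to index a 2^n-sized table with a cheap mask:

```python
def old_hash(s: str) -> int:
    """Ancient style: multiply by a small constant, xor in each byte.

    Unseeded and weakly mixing, so it needs a prime table size (and a
    slow 'mod') to spread keys out - and chosen collisions are easy."""
    h = 0
    for b in s.encode():
        h = (h * 31) ^ b
    return h

def fnv1a_64(s: str, seed: int = 0) -> int:
    """64-bit FNV-1a with a per-process seed (the randomisation step);
    diffuses well enough for power-of-two table sizes."""
    h = 0xcbf29ce484222325 ^ seed      # offset basis, perturbed by seed
    for b in s.encode():
        h ^= b
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF  # FNV prime, mod 2^64
    return h

TABLE_SIZE = 1 << 16                   # 2^n buckets...

def bucket(key: str, seed: int = 0) -> int:
    # ...so indexing is a single AND, not a division
    return fnv1a_64(key, seed) & (TABLE_SIZE - 1)
```

The names and table size here are illustrative; the point is only that a well-mixing seeded hash lets you mask into a 2^n table, while the weak hash leans on a prime modulus to compensate.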

more than 2 years ago

Fake Raspberry Pi Shops Pop Up

Jamie Lokier Re:Watch out for the cheap knock-offs (119 comments)

If it's anything like the Chinese knockoff of the Nokia N900, it'll look identical (right down to the logo) but be completely different and relatively useless.

more than 2 years ago

Raspberry Pi PCB Layout Revealed

Jamie Lokier Re:What a wonderful project! (112 comments)

The bit about my own history was just to illustrate that young people (the target audience for RP apparently) do take an interest in that sort of thing, not to suggest a method! Of course nobody would use that approach any more! (The Elite reference was because David Braben co-authored Elite and is also involved in RP).

If analysing the blob statically, and if you know the instruction architecture, we have much better tools now, including disassemblers, decompilers, type inference and much more. And internet so we can collaborate better.

16MB is a big blob, but it's highly unlikely that much of it is needed to make a useful open source subset of the functionality.

For perspective on speed: Recently I had to reverse engineer about half of a 1.5MB ARM driver blob in some detail, enough to fix bugs and improve performance deep within it. I'm not going to say what it was, only that it took me about 2 weeks with objdump and some scripts, not using more advanced tools. I didn't enjoy it because it was just to fix some bugs the manufacturer left in :-/ (The best bit was a one-bit change that tripled video playback performance and stopped it stuttering :roll-eyes:)

But there may be a big fat license prohibiting anyone from openly using the results of that type of deep code analysis on the RP's blob.

Plus, there's the secret GPU/RISC architecture to get to grips with; that's not going to be obvious.

So it would probably have to be Nouveau-style: Run the original, watch its interactions with the device (with tracing probes), replay things, change things randomly, try things, gradually build up a picture through guessing as much as anything. That's a much bigger task than statically analysing a blob's code. (At least, to me it seems so.) I don't know whether it's practical on the RP, and I don't know whether it's too difficult. But it worked with Nouveau - and that now supports a lot of nVidia chips - so not to be dismissed as impossible.

You never start all over after a chip rev. That's why they call them revs, not new architectures. You can diff code in blobs if need be; often the changes for a chip rev are very small.

You may be right about needing a lot of 11-year-olds (or others). Luckily the RP is cheap and interesting enough, that it might attract enough interest.

The suggestion isn't entirely serious, but nor is it an impossible task, so I think it's worth floating the idea to see how much interest there is in at least looking further at the practicalities and legalities.

more than 2 years ago

Raspberry Pi PCB Layout Revealed

Jamie Lokier Re:What a wonderful project! (112 comments)

all the software is "open" yet obfuscated

The entire Raspberry Pi depends on a gigantic proprietary blob from Broadcom.

So let's do a Nouveau-style reverse engineering project. How hard can it be?

Sounds like a perfect project for the target audience: curious and talented kids. With a bit of experienced help if they get stuck (seems unlikely to me though, with sufficient time & motivation). Some kids love reverse engineering. I did when I was young and I was far from the only one (but we didn't have an internet to meet each other back then).

(I did loads of reverse engineering from about age 11+ (that was 1983), starting with the BBC and moving on to everything I could get access to, pulling apart games (starting from the binaries), changing behaviours, porting them from tape to floppy disk ;-), even porting them to new architectures, and now I think about it, quite a lot of hacking on video hardware of the time, both in hardware, and quirky programming to make it do useful things it wasn't designed to do. If Mr Braben is listening, I printed a whole disassembly of Elite, BBC disk version on dot matrix that took days to print (wow just got a flashback), and spent a long time learning from its algorithms, some of which I still use today - thank you ;-) )

more than 2 years ago

10k Raspberry Pi Units Available In December

Jamie Lokier Re:I want more than an arduino(s) (123 comments)

These days there's plenty of intersection between embedded control (with GPIOs, I2C etc.) and driving some kind of display.

At the moment, for those applications at low volumes (1000), the Raspberry Pi is the only thing I've seen at a competitive price. Everything else - including mini/nano-ITX PCs - is either way too expensive, lacks good video by current standards, or (thinking of STB chips) the parts can't be had without 10-100k volumes, a high initial fee, a big fat NDA, and very buggy drivers/SDKs (been there...).

I too am sad that there's not a lot of chip data. I will be getting some Raspberry Pis to trial applications on, but also testing absolutely everything I need to use on it before ordering in quantity. Never trust a manufacturer's specifications - and never trust drivers you can't fix yourself without *lots* of testing. Especially where video is concerned.

It's kinda weird that they can sell them for less than comparable components can be easily bought for, but kinda wonderful compared with everything else out there, if it works as well as they say. I wonder if the low price will really last. And I wonder how long before someone starts a Nouveau-style GPU reverse engineering project ;-)

more than 2 years ago

Synaptic Dropped From Ubuntu 11.10

Jamie Lokier Re:Install (360 comments)

Fair enough.

I use aptitude, both from the command line and in system-building scripts, and prefer its command-line options. Some of the options are unique and handy ("aptitude why"), but there are a few nasty things about it. It is extremely slow to do anything (like "aptitude unmarkauto foo"), even if you are queuing up a sequence of changes; even "aptitude search" is slow, while "apt-cache search" gives more results and is instant. The aptitude man page is basically out of date and missing important information - it just tells you to read the manual, and you have to find out for yourself that the manual lives in /usr/share/doc/aptitude. Worst of all, on a system where people have inconsistently mixed "apt-get" and "aptitude", the APT state for manually/automatically installed packages, combined with aptitude's notion of queued-up operations, can get quite muddled, and a subsequent dist-upgrade can sometimes do very strange, bad things.

Both use the underlying APT framework, but dressed in slightly different ways that unfortunately go beyond just how things are invoked and presented.

It would be nice if they'd integrate the state to be the same for all APT-using programs, and integrate the config options - some options have the same name in apt-get and aptitude, others are different, and of course neither is listed fully or accurately in its man page. Improving "aptitude search", making it run faster (especially when just querying), and moving the curses UI to a separate program would let aptitude really be an always-recommendable replacement for apt-get and apt-cache. I've admin'd systems where I have to be careful to use the right one of apt-get or aptitude, because the other behaves weirdly (both ways); that's not nice.

I'm surprised Debian recommends aptitude as the definitive tool to use while it still feels like a work in progress.

Sorry, you may sense I've butted heads with aptitude a few times :-)

more than 3 years ago

Vint Cerf Says Fix the Net With More Pipe

Jamie Lokier Re:Bandwidth fixes don't fix latency problems (341 comments)

Actually if you make the bandwidth 100x the amount actually being used, then variable latency and quality cease being problems. In some ways, keeping pipes with excess bandwidth is the simplest engineering solution to what are otherwise rather complicated problems (QoS, negotiation, timing, congestion, neutrality etc.).

more than 3 years ago

Vint Cerf Says Fix the Net With More Pipe

Jamie Lokier Re:Just wave that magic wand (341 comments)

Over-the-air HD video is up to 19 megabits per second, so the equivalent download would require a 4.6 gigabit/second link (at the end-user side; the server side would have to be many times that).

Peer to peer, like BitTorrent - there's no need for the bandwidth to concentrate linearly at the server.
There is no good reason why the upload bandwidth can't be high as well, even if it's not as high as the download speed.

It would also require some type of storage device that can handle 570 megabytes per second, which is an order of magnitude faster than current hard drives.

But not for long: hard drives are at roughly 100 megabytes/s now (multiply up for RAID), and some SSDs are faster. Anyway, if you're only downloading 8GB, that will fit comfortably in RAM by the time such links are rolled out.
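The quoted figures are easy to sanity-check (a quick sketch, using decimal units as link speeds conventionally do; the 19 Mbit/s and 4.6 Gbit/s numbers come from the quoted post):

```python
stream_mbit_s = 19        # over-the-air HD video bitrate, from the quote
link_gbit_s = 4.6         # hypothetical end-user link, from the quote

# How many times faster than realtime such a link pulls the stream down:
speedup = link_gbit_s * 1000 / stream_mbit_s          # ~242x realtime

# Sustained storage write rate needed to keep up with the link:
write_mb_s = link_gbit_s * 1e9 / 8 / 1e6              # 575 MB/s

print(round(speedup), round(write_mb_s))              # 242 575
```

So 4.6 Gbit/s does work out to about 575 MB/s, consistent with the "570 megabytes per second" storage figure in the quote.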

more than 3 years ago

Vint Cerf Says Fix the Net With More Pipe

Jamie Lokier Re:Makes sense... (341 comments)

I make that closer to 50Tbit/s for the two video panels.
But why so old-skool?
120Hz is already out of date. Let's play with 300Hz. TVs claim more, but it's all the same order of magnitude.
Decent uncompressed holography, 200nm pixels, is about 125kPPI. Let's stick with 32-bit, only need one channel though.
Something you can point a telescope at and still see the details.
And you obviously want a holo-video wall in each bedroom for chatting, not a mere window.
Let's call it 8 feet by 12 feet, or 30,000 square inches per person.

I make that a cool 28 x 10^18 = 28 million Tbit/s = 28 Ebit/s, per person, for home use, if you don't compress.

More up-market houses will want a dedicated holo-conferencing / work-at-home room, and of course pictures of the sky on the ceilings as well as other decorative surfaces. So there's still a market premium for Zettabit links.

That's nice for chatting, parties, pretty sky pictures etc. but anyone doing scientific or computational research at home will want a proper pipe for their off site backups.

(Obviously we would compress all the above heavily, but that's harder to evaluate.)

more than 3 years ago

There Oughta Be a Standard: Laptop Power Supplies

Jamie Lokier Re:magsafe fuckers (482 comments)

Why don't you just read it unplugged, and plug it in when you put it down and go to sleep? Is your laptop battery that far gone that it can't last however long you're reading in bed?

(a) My housemate's MacBook is permanently plugged in because the battery lasts about 10 minutes now.
I'm not sure how old it is, but as it's completely fine at web browsing including video, there is no reason to replace it.

(b) Some people read in bed for many hours.

more than 3 years ago

There Oughta Be a Standard: Laptop Power Supplies

Jamie Lokier Re:Magnetic connector with strain relief (482 comments)

Indeed both my laptops needed the power connector resoldering.

I've also seen not one, not two, but three power supply cables fail at a strain relief point.
Two of them with large sparks that could have caused a fire if they'd been resting near the wrong surface.

more than 3 years ago

There Oughta Be a Standard: Laptop Power Supplies

Jamie Lokier Re:Mod summary up! (482 comments)

A cell phone isn't going to source power to anything. My PDA isn't going to source power to anything[...]

You don't anticipate them supplying power to the USB peripherals (memory sticks) you plug into them?

[...] My computer isn't going to source power to anything (via the charging jack). [...]

It would be a nice feature if it could, so you could feed one device from another's battery (I do that a lot, charging my phone from my laptop when travelling), but I agree it almost certainly won't happen - at least not with the standard under discussion, which is complex and yet not sophisticated in the way USB is.

Yes, a full-blown USB connection needs to have a smart communication system so the devices can tell the host what they are and etc. No, a device built to charge through a USB connection doesn't need to communicate shit, all it needs to do is see 5V on the input and assume that it is connected to something that will limit the current it will provide if necessary. That means you can use any 5V supply to charge it, whether that is a laptop, a battery, or a wind turbine.

I often charge my cell phone from my laptop, as the phone charges over USB. The phone can also be a USB master. Due to a bad implementation, this phone can't power USB peripherals, so you have to use a stupid power+USB splicing cable, but this should change if/when they make a phone that gets it right. That would be a device charging over USB that needs to negotiate power direction.

more than 3 years ago

Synaptic Dropped From Ubuntu 11.10

Jamie Lokier Re:Install (360 comments)

So why do nearly all Debian documentation examples still say "apt-get" instead of "aptitude"?

more than 3 years ago


