


Jolla Crowdfunds Its First Tablet

Lemming Mark Re:What? 64-bit? (56 comments)

(I realise that still assumes there's enough memory for the applications to usefully run, which was at least part of your original point)

about 2 months ago

Jolla Crowdfunds Its First Tablet

Lemming Mark Re:What? 64-bit? (56 comments)

For the default Linux kernel settings, with anything approaching or exceeding 1GB of RAM you can actually get a benefit from more address space. A 32-bit kernel only keeps about 1GB permanently mapped by default, because of the restrictions of a 32-bit address space - and some of that 1GB is taken up by device mappings, rather than actual memory. The result is that the kernel has to create temporary mappings to access any memory above that limit ("highmem"). With a 64-bit system the kernel can keep it all mapped, all the time.

My comment applies to x86 specifically - other architectures won't necessarily have the same cost / benefit tradeoff. Also, there have been kernel options that allow it to map 2GB (with a reduced 2GB address space per process) or the full 4GB (at a performance cost). They're not often used, but in a more appliance-like device such as this - where nobody is going to plop in more memory later and change the cost / benefit analysis - they may also be a viable option.
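The split described above can be sketched with some back-of-the-envelope arithmetic (the 128MB figure reserved for vmalloc / device mappings is the conventional default, not something read from a live kernel):

```python
GB = 1024 ** 3
MB = 1024 ** 2

# Classic 32-bit x86 defaults: 3GB of user virtual address space per
# process, 1GB for the kernel's permanently-mapped window.
kernel_va = 1 * GB
vmalloc_and_io = 128 * MB             # conventionally reserved for vmalloc/ioremap

lowmem = kernel_va - vmalloc_and_io   # RAM the kernel can keep mapped
print(lowmem // MB)                   # -> 896 MB of "lowmem"

ram = 2 * GB
highmem = max(0, ram - lowmem)        # the rest needs temporary mappings
print(highmem // MB)                  # -> 1152 MB of "highmem"
```

On a 64-bit kernel the highmem figure is simply zero, which is exactly the benefit above.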

about 2 months ago

Intel Claims Chip Suppliers Will Flock To Its Mobile Tech

Lemming Mark Re:If I remember correctly... (91 comments)

The first Itaniums had x86 compat in hardware and were, I believe, disappointingly slow at executing x86 code. Obviously that's something Intel could have improved had they applied themselves to the problem (and maybe they would have made it faster if they hadn't been expecting / hoping / planning to replace x86 anyhow).

But given the different philosophies of the architectures, I think it's somewhat plausible that doing an x86 -> Itanium conversion in hardware is just a bit awkward and that software might genuinely give the flexibility to do a better job. Around the same time, Transmeta were selling chips that exclusively exposed a software-emulated x86 layer for use in laptops. I remember wishing Intel would buy their tech and apply it to Itanium / x86 compatibility.

about 2 months ago

Apple A8X IPad Air 2 Processor Packs Triple-Core CPU, Hefty Graphics Punch

Lemming Mark Re:I don't really see the point. (130 comments)

Apple seem to be pushing their mobile CPUs forward quite fast - they're also way ahead of the curve in adopting 64-bit ARM. I wonder if there's a longer term strategy to start migrating devices like the MacBook Air over to their A-series CPUs, instead of Intel. That could tie things together quite nicely for them.

about 3 months ago

A Warm-Feeling Wooden Keyboard (Video)

Lemming Mark Re:That... looks... horrible. (82 comments)

Maltron keyboards are kind of crazy - they're still made using very low volume manufacturing techniques. The keyboard shells, AFAIK, are vacuum formed and (unless things have changed recently) I think they do manual point-to-point wiring on the switches. But if you look at the sculpted shape of a Maltron, it doesn't lend itself to conventional PCBs.

I'm typing on one now - I think it's quite an old one, but it looks as though the design changes since have mostly been smallish refinements and updates to the controller / electronics. I got mine from an office clearer on eBay; otherwise they're very expensive and I probably wouldn't have got one.

I've also got a Kinesis, an ergo board which came later (and with a strikingly similar design). It feels a bit more like a slick, mass-manufactured product, but I've known people to insist that the Maltron is ergonomically better overall. I'm not so fussy - I'm just glad I got two cool keyboards for prices I felt I could afford!

about 6 months ago

HP Unveils 'The Machine,' a New Computer Architecture

Lemming Mark Re:Run a completely new OS? (257 comments)

There was work done on single address space operating systems but retaining multiple protection domains - the Nemesis research OS did this. It sounds mad at first but every process can still have separate pagetables, they just happen to all agree on the virtual addresses of shared libraries, shared memory areas, etc. This means you can still make the OS secure (though admittedly it would not be compatible with modern address space randomisation strategies).

Honestly, I can't quite remember what the main benefits actually were!

L1 caches are often indexed using virtual addresses, so I suppose it may improve the extent to which shared lib code remains cached across process switches. I can't see that it would avoid TLB flushes as such, because you'd still want to clear out mappings that the process you're switching to shouldn't have access to... It does mean that data structures in shared memory can contain pointers that actually work, but that doesn't sound *that* important.
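The "pointers in shared memory actually work" property can be sketched on an ordinary Unix system, since a mapping created before fork() lands at the same virtual address in both processes (in a real single-address-space OS the kernel would guarantee this agreement system-wide; here it just falls out of fork semantics):

```python
import ctypes
import mmap
import os
import struct

# Anonymous shared mapping, created before fork so parent and child
# both see it at the same virtual address.
shm = mmap.mmap(-1, 4096)
base = ctypes.addressof(ctypes.c_char.from_buffer(shm))

# Parent stores a string at offset 128, then stores a *raw pointer*
# to it at offset 0 - no offsets, no translation.
shm[128:133] = b"hello"
shm[0:8] = struct.pack("Q", base + 128)

pid = os.fork()
if pid == 0:
    # Child: because the mapping's address coincides, the raw pointer
    # written by the parent can be dereferenced directly.
    (ptr,) = struct.unpack("Q", shm[0:8])
    print(ctypes.string_at(ptr, 5).decode())   # prints "hello"
    os._exit(0)
os.waitpid(pid, 0)
```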

I'm sure there was some other, more compelling reason but on commodity hardware I can't remember what it would be. Hurm.

about 8 months ago

Goodbye, Ctrl-S

Lemming Mark C-x C-s (521 comments)

I'm used to just randomly hitting Ctrl+X then Ctrl+S in emacs when I pause and my fingers have nothing better to do. Semi-frequently, I do this in other applications without even realising I just did it, with various mildly weird results...

about 8 months ago

Raspberry Pi Compute Module Release

Lemming Mark Re:Mostly pointless (51 comments)

I do remember a talk where Eben Upton said that the routing was relatively complex under the main chip. Pinning it out onto an edge connector presumably gives you the luxury of building a much simpler board to plug it into - design-wise and possibly cost-wise since you might get away with fewer layers.

Seems like small-to-mid volume manufacturers might find it handy, even though high volume manufacturers would presumably just plonk the chips directly on.

Not that I'm an electronic engineer, so obviously take this with a pinch of salt.

about 10 months ago

Linux Developers Consider On-Screen QR Codes For Kernel Panics

Lemming Mark Re:Good idea (175 comments)

As AmiMoJo also noted, when you have a kernel panic all bets are off regarding which parts of the kernel are OK. If the behaviour of the disk driver or filesystem has been affected, trying to write a kernel dump into a normal disk partition could damage your filesystem. It might work, but it does seem a good idea to be properly paranoid. I didn't know that Windows uses a special reserved area of the boot drive - that does make sense as a solution!

There have been various systems for crash dumping under Linux, though. I think the de-facto solution (the one that was accepted by the kernel devs) ended up being kdump, which is based on kexec (kexec is "boot directly to a new kernel from an old kernel, without a reboot"). This allows full crash dumps with (hopefully) decent safety, so it is possible to do this if configured.

In kdump, you have a "spare" kernel loaded in some reserved memory and waiting to execute. When the primary kernel panics it will (if possible) begin executing the dump kernel, which is (hopefully) able to reinitialise the hardware and filesystem drivers, then write out the rest of memory to disk. I'm not sure how protected kdump's kernel is from whatever trashed the "main" kernel but there are things that would help - for instance, if they map its memory read only (or even keep it unmapped) so that somebody's buffer overflow can't just scribble on it during the crash.

Obviously, having a full kernel available to do the crashdump makes it easier to do other clever tricks, in principle - such as writing the dump out to a server on the network. That's not new, in that there used to be a kernel patch allowing a panicked kernel to write a dump directly to the network; it just seems easier to do it the kdump way, with a whole fresh kernel. Having a fully-working kernel, rather than one which is trying to restrict its behaviour, means you can rely on more kernel services - and probably just write your dumper code as a userspace program! Having just installed system-config-kdump on Fedora 20, I see that there's an option to dump to NFS, or to an SSH-able server - the latter would never be sanely doable from within the kernel but is pretty easy from userspace.

Various distros do support kdump. I think it's often not enabled by default and does require a (comparatively small) amount of reserved RAM. So that's some motivation for basic QR code tracebacks. I suppose another reason is if they expect they can mostly decipher what happened from a traceback, without the dump being necessary - plus, with a bug report you can easily C&P a traceback.
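For reference, the configuration involved is fairly small - an illustrative sketch of a Fedora/RHEL-style setup (directive names as in kexec-tools' kdump.conf; check your distro's documentation for the exact details):

```
# 1. Reserve memory for the capture kernel on the primary kernel's
#    command line, e.g.:  crashkernel=160M
#
# 2. Point /etc/kdump.conf at a dump target:
path /var/crash                  # local filesystem target
#nfs dumphost:/export/crash      # ...or an NFS export
#ssh kdump@dumphost              # ...or copy out over SSH
#
# 3. Enable the service:  systemctl enable --now kdump
```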

This discussion has just inspired me to install the tools, so maybe I'll find out what it's like...

about 10 months ago

Plan 9 From Bell Labs Operating System Now Available Under GPLv2

Lemming Mark Re:I find it interesting (223 comments)

I'll apologise in advance for rambling but I don't often get to talk about Plan 9 and it's nice to have the opportunity!

X11 itself as files would, I imagine, become a bit icky because it's a complicated protocol. But as I understand it, the Plan 9 windowing system was effectively exposed as files (i.e. the display server exported a filesystem interface to applications) and that did actually permit some pretty cool stuff...

Windows basically appeared as a set of files that let you draw to the surface within the window. The interface exposed to windowed apps by the windowing system could also be consumed by other instances of the windowing system, so that nesting instances of the windowing system into windows Just Works.

The fact that everything was in the filesystem meant that network FS shares could be used to e.g. provide rootless graphical remoting of applications - you just had to arrange for the right filesystem mounts to be available and the display would automagically work.
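From memory (so treat the details as approximate), the per-window files rio serves into each window's namespace look roughly like this:

```
/dev/cons     # the window's keyboard / console stream
/dev/consctl  # console control (e.g. raw mode)
/dev/text     # the current textual contents of the window
/dev/window   # the window's image
/dev/wctl     # window control: resize, hide, move, etc.
```

Since remote filesystems mount into the same namespace, pointing an application at another machine's window files is just another mount - which is where the "automagic" remoting comes from.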

Having said all that, obviously these days you'd also want to worry about direct hardware access to GPUs, etc, which I'm sure would make the whole enterprise rather more complicated! Maybe that would put paid to the idea of "everything is a file" being practical for display stuff, or maybe somebody cleverer than me could point out a better way of doing it!

Further tangent: Plan 9 made device files really behave like files, which meant you could remote device access trivially using remote filesystem mounts also. This doesn't work with Unix device files and it always seems a shame that various ad-hoc remoting protocols (USB over IP, Network Block Devices, etc) get used instead. But I suspect a similar argument to the GPU could easily apply - that it's either more efficient to have a specialist protocol or that some devices are just too complicated to meaningfully abstract like that. Who knows.

In some ways, I think I "miss" the olden days when if you'd got a Plan 9 system you probably could feel justified in believing you were in the future!

about a year ago

Linus Torvalds Gives 'Thumbs Up' To Nvidia For Nouveau Contributions

Lemming Mark Re:wow, you really get a sense... (169 comments)

I thought the Amber core was based on an older version of the ARM, in order to avoid IP problems. Not sure, though...

about a year ago

Wayland 1.4 Released — Touch, Sub-Surface Protocol, Crop/Scale Support

Lemming Mark Re:Not Wayland, but Weston (128 comments)

There was a SPICE backend as well, which also sounded interesting but I don't know what the status of that was.

1 year,6 days

Dell Joins Steam Machine Initiative With Alienware System

Lemming Mark Re:Why am I skeptical ? (110 comments)

My employer did buy a top-of-the-range Alienware desktop once because it was the fastest available machine for single threaded performance (at least, out of off-the-shelf options) due to its being factory overclocked. I imagine if we'd gone for a more boutique vendor we might have got faster but I suppose it was still good to have the support.

FWIW we weren't just playing games, we actually had long running single threaded simulations that we wanted to get out of the way as fast as possible! It's now my desktop PC after my previous one died - so that worked out OK in the end!

1 year,20 days

Emacs Needs To Move To GitHub, Says ESR

Lemming Mark Re:What's bzr? (252 comments)

I thought there was a fairly complex history here, since the current bzr was (I thought) originally bzr-ng, an alternative to the original Bazaar tool. And I thought that *that* came from GNU Arch, which (speaking loosely) I gathered wasn't well understood or enjoyable to use. I don't know how much of the current behaviour dates back that far, though, so there may not be too much in common now!

1 year,29 days

The Quest To Build Xbox One and PS4 Emulators

Lemming Mark For older games, consider Retrode (227 comments)

The Retrode is a brilliant little gadget: http://www.retrode.com/

It's basically an old-school console cartridge -> USB adaptor. It also supports old Megadrive / SNES gamepads and doesn't require host software (which is actually rather neat - it'll appear as a USB mass storage device with a cartridge image on it, while presenting the controllers as either gamepads or keyboards). With further adapters you can plug in Master System, Game Boy and N64 carts (plus two N64 controllers).

It's just a really nice piece of work. I use it to rip my cartridges, just like I rip CDs, then put them into whatever emulator I like. Avoids the legally dubious websites, etc. I can imagine there might be grey areas in some emulation stuff still (e.g. some emulators need a BIOS image, which someone has to have dumped from the console) but that's only for certain consoles - and at least you don't have to go on dodgy websites to download the games you already own.

about a year ago

Debian To Replace SysVinit, Switch To Systemd Or Upstart

Lemming Mark Re:WWBD? (362 comments)

I've heard tempting-sounding things about Debian kFreeBSD, actually - aside from anything else, FreeBSD has a port of ZFS. So if you want ZFS with a familiar userland (GNU utilities, Debian package management, loads of packages available) it does look quite appealing. I'm not sure how common it is to use ZFS under FreeBSD so far, though.

Also, there are Solaris distros out there, which is potentially another way to get the same effect. Nexenta started as one, though I remember them switching more to focus on server stuff since then...

about a year ago

Why Apple Went 64-Bit With the iPhone 5s

Lemming Mark Re: 64-bit BS (512 comments)

I don't think I'm really adding much here, but the discussion of the 8051's quirks struck a chord with me! The 8051 is a bit weird in places, although in fairness with a C compiler you can just mash on through that and not worry too much. If you actually have to look at the architecture, you can definitely see its age, though. But for 8-bit stuff, the AVR architecture (Atmel's microcontrollers) is genuinely relatively nice, despite being just an 8-bit CPU. AVRs are RISC CPUs, so they actually have a fair number of registers and comparatively few weird quirks (that I could see).

The other big advantage in my particular line of experience is that as long as a CPU has lots of registers, gcc often supports it. Otherwise you end up having to use slightly less mainstream compilers - which is basically OK, they're still nice software. But they're not as comfortable to me as the standard GNU toolchain. Of course, I'm sure plenty of commercial embedded programmers aren't familiar with the GNU toolchain and so don't care about that.

about a year ago

Intel Rejects Supporting Ubuntu's XMir

Lemming Mark Re:Layering? (205 comments)

I can speculate a bit with things that sound plausible to me given my knowledge of the system - but I might still be a bit off target... Still, maybe it helps a little.

Mir and Wayland both expect their clients to just render into a buffer, which clients might do with direct rendering, in which case the graphics hardware isn't really hidden from the client anyhow. AFAIK it's pretty normal practice that there's effectively in-application code (in the form of libraries that are linked in) that understands how to talk directly to the specific hardware (I think this already happens under Xorg). The protocol you use to talk to Wayland (and Mir, AFAIK) isn't really an abstraction over the hardware, just a way of providing buffers to be rendered (which might have just been filled by the hardware using direct rendering).

In this case Xorg is a client of Mir, so it's a provider of buffers which it must render. The X11 client application might use direct rendering to draw its window, anyhow. But the Xserver might also want to access hardware operations directly to accelerate something it's drawing (I suppose)... So the X server needs some hardware-specific DDX, since Mir alone doesn't provide a mechanism to do all the things it wants.

As for why the Intel driver then needs to be modified... I also understand that Mir has all graphics buffers allocated by the graphics server (i.e. by Mir itself). Presumably Xorg would normally do this allocation(?), in which case the Intel DDX would need modifying to do the right thing under Mir. The only other reason for modifying the DDX that springs to mind is that perhaps the responsibilities of a "Mir client" divide between Xorg and *its* client, so this could be necessary to incorporate support for the "Mir protocol" properly. That's just hand-waving on my part, though...

Bonus feature - whilst trying to find out stuff, I found a scary diagram of the Linux graphics stack but my brain is not up to parsing it at this time of day:

about a year ago

Intel Rejects Supporting Ubuntu's XMir

Lemming Mark Re:Layering? (205 comments)

I'm honestly not super clear myself! But the DDX is, as I understand it, the in-Xorg portion of the graphics driver. So I guess it's not unreasonable that that component needs to know it's not got complete control of the hardware, as opposed to the Xorg-only case where it would have. Presumably it needs to proxy some operations through Mir (or Wayland, for XWayland) that it'd normally just set directly.

A *bit* like running X under X using Xnest or Xephyr, though I'd imagine it's less extreme than that (since those, I'd guess, have to issue X-level drawing commands to their host X server, whereas to get graphics under Wayland/Mir they'd just render to a memory buffer like any Wayland/Mir client).

All slightly speculative since I'm not familiar with the in-depth technical details!

about a year ago

Official: Microsoft To Acquire Nokia Devices and Services Business

Lemming Mark Re:Beware of Microsofties bearing gifts (535 comments)

There used to be a village nearby (ish) to where I grew up, in which there were some peacocks that just roamed freely around the roads. If one of them felt like strolling in front of your car, you just had to put up with it. I remember seeing the peacock had gone to roost in the tree by the pub for the night, which at the time seemed a bit notable but - now you mention it - I didn't really know they could fly. I've never actually seen one take off before or since but apparently if they feel like it, they can get up there... Weird.

about a year ago



A History of 3D Gravity Games

Lemming Mark writes  |  more than 6 years ago

Lemming Mark writes "Pretty much any gamer will have come across old school 2D Gravity Games. Starting with early games such as Lunar Lander and Gravitar, there's been a compelling purity to flying a ship with a single, downwards-pointing thruster, trying to make momentum and gravity your allies instead of your enemies. Whilst there are so many 2D Gravity Games in existence that most gamers have played at least one, 3D gravity games are thinner on the ground. Many people have not even seen one before. Despite this, there has been a slow but steady stream of titles in this niche, which have had their own devoted followers over the years. Recently, my own projects have led me to investigate this niche more thoroughly and put together a History of 3D Gravity Games to share the results of that research."


Lemming Mark has no journal entries.
