
Comments


German Scientists' Visible Light Network Hits 3Gbps

stripes Cool, but... (79 comments)

...can anyone come up with a use for this that existing WiFi doesn't already cover? It isn't more range, and I'm not sure it is usefully less range. If you are worried about eavesdroppers on the network you need light-tight rooms, but if you want to set up a whole-house network you need repeaters for each room.

This seems more like an answer in search of a problem. Sometimes that means we will find a problem we didn't understand we had, and sometimes this turns out to be the technology equivalent of the big kitchen junk drawer full of bits that almost never get used.

about a year and a half ago

The Linux-Proof Processor That Nobody Wants

stripes Re:RISC is not the silver bullet (403 comments)

I've used PowerPC based CPUs in radiation environments without issues

Cool! Was it a PPC designed to be radhard, or a normal PPC that turned out to be radhard enough?

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:oversimplified (403 comments)

Not just the production process, though. They do well in many other areas of research; the ALUs are mighty well designed, for example. I really do hate the instruction set, and I'm not fond of the company, but they do really good work in a lot of areas.

I think if you gave Intel and IBM equal research budgets and aimed them at the same part of the market, it would be hard to predict who would win. Any other two companies, though, and the bet is clear.

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:Thanks (403 comments)

That situation existed in the early 1990s/late 1980s when the terms CISC and RISC were invented. The x86 existed and was CISCy on the outside and microcoded inside. The VAX was the same. The arguments were never "you can't implement CISC internally the same as a RISC" because they were all already done that way. It was "if you avoid X, Y and Z in your programmer visible instruction set you don't need all that cruft in the chip". What makes something RISC or CISC was originally all about the instruction set, and I see nothing that has changed in the last 20 years that makes it useful to change the definitions.

Collapsing two useful words into one useless meaning doesn't add value to the language, it destroys it (well, not the whole language, just those two words). So why do it? If the new meanings actually had some value, sure I can see adopting new usage, but why switch to something worse?

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re: ia32 dates back to the 1970's -- B.S. (403 comments)

Say again? Are you telling me they had a 32-bit architecture in the 1970s...? I call BS.

No, but the way ia32 is binary compatible with the 16-bit x86 code from the 1970s makes it relevant. You still have to handle AL and AH as aliases to AX. Ask Transmeta how much of a pain that was (hint: that is a big part of why their x86 CPU ran Windows like a dog; the other part being they benchmarked Windows apps too late in the game to hit the market with something that efficiently handled the register aliases). If x86 mode were a fully distinct mode that ditched anything from the past that Intel decided made stuff slow, then yes, we would be talking about ia32 as a 1980s architecture.

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:RISC is not the silver bullet (403 comments)

First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC

LOAD and STORE aren't single-cycle instructions on any RISC I know of. Lots of RISC designs also have multicycle floating point instructions. A lot of second- or third-generation RISCs added a MULTIPLY instruction, and they were multiple cycle.

There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "they tend not to do that". Like "tend to have very simple addressing modes" (most have register + constant displacement, but the AMD 29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this); oh, and "no condition register", but the PPC breaks that.

Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.

Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big fast CISC before the x86 became the big deal it is today.)

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:RISC is not the silver bullet (403 comments)

So far RISC is only found in low-power applications (when it comes to consumer devices at least).

Plus printers (or at least it was last I checked) and game consoles (the original Xbox was the only console in the last 2-3 generations not to use a RISC CPU). Many of IBM's mainframes are RISCs these days. In fact I think the desktop market is the only place you can randomly pick a product and have a near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. Embedded devices used to be CISC, but now that varies a lot: lower-cost embedded devices (under $10) tend to be CISC, while over $10 tends to be RISC.

Ah! You might find CISC dominant in radiation-hard environments. There is a MIPS R2000-based silicon-on-sapphire design in that space, but pretty much everything else is CISC (I haven't looked in a while, but that is a very slow-moving market).

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:oversimplified (403 comments)

I'd say the x86 being the dominant desktop CPU has given Intel the R&D budget to overcome the disadvantages of being a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.

The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.

P.S.:

IA32 today is little more than an encoding for a sequence of RISC instructions

That was true of many CPUs over the years, even when RISC was new; in fact, even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes), which means they can get some of the advantages of out-of-order execution without any hardware OOO support. When they do get hardware OOO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (But since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall. Intel seems to be doing a nice job chipping away at it.)

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:Thanks (403 comments)

BTW, intel processors haven't been CISC for years. They're all RISC with a components that translates from the CISC instructions to RISC

Nice marketing talk. So was the VAX (most of them anyway; I think the VAX 9000 was a notable exception). I mean, it had a hardware instruction decoder, it did simple instructions in hardware, and then it slopped all the complex stuff over onto microcode. In fact most CISC CPUs work that way: in the past all of the "cheap" ones did, and now pretty much all of them do. So if you count as RISC any CPU that executes only the simple instructions directly and translates the rest, it is hard to find any non-RISC CPU. Of course, internally they aren't so much "RISCy" as "VLIWy"...

The x86 is still the poster boy for CISC. (And hey, CISC isn't all bad; pick up a copy of Hennessy and Patterson and read up on the relevant topics.)

more than 2 years ago

The Linux-Proof Processor That Nobody Wants

stripes Re:RISC is not the silver bullet (403 comments)

Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back

FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact looking at Apple's major successful product lines we have:

  1. Apple I/Apple ][ on a 6502 (largely classed as CISC)
  2. Mac on 680x0 (CISC) then PPC (RISC), then x86 (CISC) and x86_64 (also CISC)
  3. iPod on ARM (RISC), I'm sure the first iPod was an ARM, I'm not positive about the rest of them, but I think they were as well
  4. iPhone/iPod Touch/iPad all on ARM (RISC)

So a pretty mixed bag. Neither a condemnation of CISC nor a ringing endorsement of it.

more than 2 years ago

Fragmentation Comes To iOS

stripes Re:totally incoherent! (244 comments)

Fragmentation is when you need to produce several subtly different versions of the same app that does the same thing because there's several different devices that all run what is allegedly the same operating system but each manufacturer has made little modifications that make them incompatible with everything else.

That is a bit of a narrow definition. I'll totally grant that that is fragmentation, but many other things are as well. Some are simpler to deal with than others (GPS vs. no-GPS-but-WiFi-pseudo-GPS is only an issue if your app needs high-accuracy position data). Needless software fragmentation is the most annoying because it doesn't really make life better for anyone, while a lot of hardware fragmentation exists either to satisfy a price point (and therefore bring a device to people who wouldn't have been able to afford it, or a feature to people willing to fund it without forcing others to do so), or because things do tend to get better year to year.

The "our brand of Android has this and that extra, and that and the other changed a little" business feels too much like the '80s/'90s Unix fragmentation that didn't make Unix users happy, and I think ultimately cost it the chance to win big on the desktop (or, for a more charitable view, delayed victory until OS X came in... but I think that is wishful thinking; OS X has a non-trivial percentage of the desktop market, but Windows is dominant there). Now, just because it worked out badly before doesn't mean it will do so again (we are not doomed to repeat the past, not always at any rate), but it still smells bad.

I would say iOS has a few differences from device to device, many of which have graceful fallbacks. A very few do not. Even so it is just a tiny handful of things, and for the vast majority of apps it comes down to supporting two very different screen layouts plus a third similar layout, and two sets of pixel density for artwork (or using lots of vector art). That is pretty much it for "normal" apps. Some apps need to worry because they push the CPU and/or the GPU very hard, so they need to scale back on slower hardware, but that is mainly games, and games do that everywhere except consoles.

more than 2 years ago

Fragmentation Comes To iOS

stripes Re:totally incoherent! (244 comments)

So you say that having two groups of users, one of which has constant IP connectivity, and one that does not, that's not fragmentation

No, having two groups of people that both sometimes have IP connectivity and sometimes don't, but at differing ratios, is not fragmentation. My iPhone has no IP connectivity on large swaths of Highway One, especially north of San Fran. Software attempting to deal with that is no different from software on my iPad, which has cellular data hardware, but I've turned it off until I decide I want to reactivate my data contract. Software attempting to deal with both of those is no different than running on my wife's old (WiFi-only) iPad.

more than 2 years ago

Ask Slashdot: Are The Days of Homebrew Gaming Over?

stripes Depends on what homebrew means... (181 comments)

The last few years we have seen Microsoft, Nintendo, Sony and Apple all bring out means to thwart homebrew development. The app store on both Android and iOS have taken many homebrew devs over to try and break the market.

Well, I guess it depends on your definition of homebrew, but I think it is hard to make a game for iOS or Android that wouldn't be let into the store (unless you, say, crash on launch, or are noticed grabbing all the user's contacts without permission). It is in fact far simpler than it was to get your own games onto the Dreamcast! You get the real dev kit for very cheap (cheaper than the hardware you are developing for), and while the hardware to host the development on isn't free, it isn't exactly expensive (hardware dev systems for the 16-bit era ran to $30k; now it is just a Mac mini, or pretty much any old PC for Android).

On the other hand, if homebrew has to mean "we figured out how to get onto the hardware ourselves and made our own pseudo dev kit", then yes, Android and iOS are hurting that effort, because who really wants to go to all that bother when they could just get down to making a game?

more than 2 years ago

John Romero's Doomy View On Android and Ouya

stripes Re:Bullshit (375 comments)

The ST was litigated out of existence

Wikipedia and google don't show anything on that story. I had a 520ST and later a 1040ST, but later "found Unix" and lost touch with the ST. Do you have any pointers to this story?

more than 2 years ago

John Romero's Doomy View On Android and Ouya

stripes Re:FUD (375 comments)

to get into the big market (Android), it's well worth it

The market may be big, but does it pay? A lot of small developers have reported that Android apps make a whole lot less than iOS apps ("an order of magnitude" sticks in my mind, but a quick Google search shows a lot of 4x articles and a smattering of 11%; I didn't see an order of magnitude in the first page of results).

Assuming the 4x number is true, is it worth getting $1.25/app and writing C/C++ for 80% of the app, and then writing the last 20% in ObjC and again in Java -- or are you better off writing it all in ObjC and then starting work on the next app? (I imagine the right answer depends on how well served your core logic is by ObjC and the available frameworks, and also the total sales involved, and if you have another app that would make similar money, or if you are "played out" of good ideas)

more than 2 years ago

John Romero's Doomy View On Android and Ouya

stripes Re:FUD (375 comments)

The business logic for your app should be written in a platform agnostic way, and will be trivial to port.

Sure except...

...different platforms have different optimal workflows and capabilities. This frequently drives changes into what you would think of as platform-agnostic code. This is especially true of games, but it is true of most software. The effects can vary from just having a bad port (maybe a non-native feel, or just plain a kooky UI) to needing to rewrite large parts of the "agnostic" code. This can be costly and time-consuming. Also, if you have future versions of the product, you need to decide whether to port these changes back to the original platform (or platforms) or hold them apart. Both have their own sets of issues.

...even code that can be made platform agnostic isn't always as simple to write or as fast in platform-agnostic form. For example, use of Core Data on OS X/iOS is very platform specific, but it is tied to how your objects persist across executions, and even how you represent the objects. It can save an enormous amount of effort (save/load is trivial, undo/redo can be close to trivial, and so on). When it is the perfect fit, as much as a third of the code you would normally need to write goes away.

Or if you look at Android, writing the "platform agnostic" part in Java gives you garbage collection, so you spend very little time hunting down memory leaks (you might end up with a few places that forget to nil out a pointer and end up pinning down extra memory for too long, but this isn't as common or painful as memory leaks in C/C++). No debugging pointers that now dangle into the wrong types of objects or into system heap structures. That can save a whole lot of time.

However, a platform-agnostic core (business logic, or gameplay engine, or whatever) won't be able to use any of that. You have to restrict yourself to the intersection of what every platform you want to port to will have. I would be surprised if it cost you as much as having to write it twice, but not if it cost you a good 33% more than writing it platform specific.

Then you have the platform-specific (UI?) part of your application. It could be pretty small for something like a bug tracker, or very large for a game or maybe a bike ride activity tracker. If making the core agnostic costs you 33% more, and the platform-specific part is significant, then the new platform has to be a very large percentage of the original platform's revenue before it is worth doing, versus building the faster, cheaper, but less flexible core logic and moving on to a new project (or the next version of the current project).

I know this is sad when the platform you love is the underdog, but economics isn't called the dismal science for nothing.

more than 2 years ago

OS X Mountain Lion Out Tomorrow

stripes Re:designed to fend off malware (230 comments)

There is a checkbox in System Prefs to turn it off. Or if you control-click on the app and select Open, it will launch (and be whitelisted for future launches).

It is really so people don't double-click an app that has an icon that looks like an MP3. Or so maybe they won't launch what looks like Photoshop but isn't. If it gets enough adoption from 3rd parties I can see it being a huge help to the average user. If it gets low adoption it'll be more useful for folks who really know what is going on.

more than 2 years ago

What's To Love About C?

stripes Re:because - (793 comments)

I think C's originators (or at least the one still living) have changed their minds about some of them. Looking at the Go language, which targets the same programming niche, some of these things have been addressed.

The semicolons are implied in most places now (as a side effect it enforces a brace style many people dislike, but that happens to be my preference; so even though I'm happy with C's semicolons, this is a borderline positive change for me).

Declaration syntax has been made "more sane", which isn't surprising; by the time K&R wrote the C book they had already started regretting it (one of the exercises was to parse C declarations into "English"; look at what the authors wrote about it).

Go revamped switch (and a lot of the control flow operators).

Some of those changes might just be because computers have gotten a wee bit faster in the last 25 years or so. What constituted a great tradeoff on a computer with a 64K (split I+D) address space, maybe 512K max RAM, and clock speeds measured in a few MHz (oh, and these were multi-user computers) is a wee bit different from what makes a good tradeoff now. (Semicolons, I think, wind up here.)

Some are likely to be a change they would still have made on the original system. (most of the control flow changes wind up here, likely variable decl too)

more than 2 years ago

Is Google the New Microsoft?

stripes Not so much (492 comments)

Sure, I admit there are similarities. Both are giant, greedy companies. Both gobble up competitors, and when they are prevented from that, they both launch competing products. I view Google's "Don't be Evil" lip service as about as transparent and self-serving as the 1990s- and 2000s-era MS open source lip service.

On the other hand, Google's own products are fairly decent. MS's are largely crap. Most times when MS buys a company, the "adopted" products go rapidly to crap. Google's "adopted" products tend to trundle along for a while. MS was a credible platform vendor, and most of its assaults on other companies were against those that built on top of MS's own infrastructure. Google has only made one Android-related purchase that I can recall. Maybe that is an area ripe for future abuse, but for the moment they have not had their own "it ain't done 'till Word Perfect won't run" moment.

If Google is the new MS, then at least the trains run on time. (most days) It ain't much, but at least it is something.

more than 2 years ago

Ask Slashdot: Who Has the Best 3G Coverage In California and Nevada?

stripes ATT+VZW is the best option in CA (134 comments)

CA is a mighty big place, and I haven't traveled all that much of it. However, I do happen to have phones on ATT's 3G network that can act as hotspots, and USB networking devices for VZW and Sprint. I don't have T-Mobile because the coverage map looked like it wasn't really useful. I also have an RV, and have left "major city areas" quite a bit. I don't currently have any 4G networking gear (unless you count all the 3G stuff the ITU reclassified, in which case I have 4G but no LTE).

In my experience there is a lot of coastline that has no service from anyone. There are some inland areas where hills or mountains block signal from everyone. Some places I can get ATT but not VZW. Some places I can get VZW and not ATT. Around the coast it was pretty even. Inland, VZW seems to have a little bit of an edge, but not a lot. Many places I could get VZW and/or ATT but not Sprint. I can't recall any places I could get Sprint but neither ATT nor VZW, but looking at coverage maps there might be such places; I've just never attempted to get signal there. However, there is no substitute for actually testing in your location. Everyone has said VZW has the best coverage for years, but for years my house had no VZW service while it did have spotty ATT service (recently VZW started serving the area, and around the same time ATT's service picked up a lot as well).

In most places where I could get both ATT and VZW 3G, the ATT was faster. Sometimes it was even faster when the device showed "fewer bars".

The "reasonable best option" I see is to get one device on ATT and one on VZW, and ignore the rest. My VZW device came from http://www.millenicom.com/ ; I don't know if they still sell them or not. They used to have a $50/month plan for 10GB. It looked like a no-contract plan, but the way it was set up, when you stop paying the monthly fee they want the device back or hit you with a big disconnect fee (and they charge for the device up front); even so it was still a bit cheaper than other VZW data devices, just not by as much as it first looked. Things may have changed since then, so look around, but make sure you give them a peek. My ATT device is an iPhone (was a 3GS, then a 4, now a 4S; my wife and I take turns getting a new one each year). Another option is the new iPad; it has a large up-front cost, but a month-by-month plan (no fee to cancel, no fee to restart). From what I have read on the net, only the VZW one currently supports hotspot sharing; ATT still hasn't gotten their ducks in a row there. Depending on what you want to do with the internet, you might be just fine only having access on the iPad anyway.

I have no data for Nevada. Last time I was in Arizona I didn't have a VZW device, but ATT seemed fine pretty much everywhere.

more than 2 years ago

Submissions

stripes hasn't submitted any stories.

Journals


How many boxes?

stripes stripes writes  |  more than 12 years ago

I used to work at UUNET, and I moved my office many times. The last few times I had six boxes, then eight. I didn't even unpack four boxes after my last move (and I was in that office for about two years); I did paw through them, so they were about half full. When I left I dragged out more than eight boxes (I think).

I have been at my new job for a while, maybe 8 months. I just packed up my office. I couldn't fill half a box. OK, I took all my framed pictures home (so Office Movers can't break them), and one book (it's hard to replace). Still, I couldn't fill half a box. Really.

Pretty pathetic for a pack rat like me.

Of course, I was at the last job for nine years...


More lights!

stripes stripes writes  |  more than 12 years ago

Got one more light; it's the B400, the smallest monolight AlienBees makes. It is about 1/4 the power of my other two lights (White Lightning X1600s). I'm going to use the B400 as a hair light. I also got four honeycomb grids: the 10, which makes a spot about two feet around on the far wall, and the 20, 30, and 40, which I just haven't tried yet since the 10 does what I want (hmmmm, I wonder why I paid the extra $60 for the other two then...).

I also went out and got a ($3) halogen bulb to replace the wimpy one that the B400 came with.

The hair light stand was pretty cheap, but it doesn't go down below about two feet, which may make it hard to hide in some shots (like anything with someone lying on the floor).

So far I'm quite happy with it, but I need to take some more shots. It's a shame I didn't know about the Bees earlier, since I think the B1600 would work as well for me as the X1600s, and they are way cheaper. Ah well. At least I saved on the hair light...
