
Comments


US Supreme Court: Patent Holders Must Prove Infringement

Forever Wondering Re:Now the next step... (143 comments)

It's not BS.

The USPTO lowered its standards [at the behest of Congress] to reduce its backlog. If an application is denied, it can be refiled [many times]. The only way to truly clear an application is to approve it [and toss it into the court system]:
http://www.techdirt.com/blog/i...

$10,000 times 20 is a trivial amount [on a corporate level] compared to an NRE budget for a legit R&D outfit. There seems to be plenty of [unscrupulous] VC money to back such refilings to get something that can be used to [patent] troll others. The first round funding for even a small startup [post angel round] is minimally $10M. This translates into 1,000 refilings.
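For what it's worth, the arithmetic is easy to sanity-check. A minimal sketch in C, using this comment's assumed figures ($10,000 per refiling, a $10M first round), not actual USPTO fee schedules:

    /* Back-of-envelope for the refiling math above.
     * The dollar figures are this comment's assumptions, not actual USPTO fees. */
    #include <stdio.h>

    int main(void)
    {
        double cost_per_refiling = 10000.0;   /* assumed cost per refiling */
        double first_round       = 10.0e6;    /* assumed post-angel first round */

        printf("refilings a first round could bankroll: %.0f\n",
               first_round / cost_per_refiling);   /* prints 1000 */
        return 0;
    }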

about 3 months ago

Man Jailed For Refusing To Reveal USB Password

Forever Wondering Strength test (374 comments)

According to a strength test, the password has only 49 bits of entropy, so it's surprising GCHQ couldn't crack it:

        < 28 bits = Very Weak; might keep out family members
        28 - 35 bits = Weak; should keep out most people, often good for desktop login passwords
        36 - 59 bits = Reasonable; fairly secure passwords for network and company passwords
        60 - 127 bits = Strong; can be good for guarding financial information
        128+ bits = Very Strong; often overkill

The checker had been posted on slashdot a while back [IIRC]:
http://rumkin.com/tools/password/passchk.php
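For the curious, here's a minimal sketch of the naive charset-based estimate (length x log2(pool size)). It is not the rumkin checker's algorithm, which uses a pattern-aware heuristic, so the numbers will differ; it just shows where a "bits of entropy" figure comes from:

    /* Naive entropy estimate: bits = length * log2(character pool size).
     * NOT the rumkin checker's heuristic; just the simplest model. */
    #include <ctype.h>
    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    static double naive_entropy_bits(const char *pw)
    {
        int lower = 0, upper = 0, digit = 0, other = 0;

        for (const char *p = pw; *p; p++) {
            if (islower((unsigned char)*p))      lower = 26;
            else if (isupper((unsigned char)*p)) upper = 26;
            else if (isdigit((unsigned char)*p)) digit = 10;
            else                                 other = 33;  /* rough printable-symbol count */
        }

        int pool = lower + upper + digit + other;
        return pool ? strlen(pw) * log2((double)pool) : 0.0;
    }

    int main(void)
    {
        printf("%.1f bits\n", naive_entropy_bits("correct horse battery"));
        return 0;
    }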

about 3 months ago

Incandescent Bulbs Get a Reprieve

Forever Wondering Re:Some fixtures need incandescent (767 comments)

Thanks. I had read an article a while back about the difficulty of getting high-lumen LEDs. The state of the art may be advancing, but my local store only had the 60 watt equiv LED when I was buying the incandescents.

about 3 months ago

Incandescent Bulbs Get a Reprieve

Forever Wondering Some fixtures need incandescent (767 comments)

While I've been using CFL's for about 90% of my lighting for ten years, I have one fixture in the ceiling of a walk-in closet that needs an incandescent.

The bulb is inverted and completely covered/enclosed. Can't use a CFL there [overheats the ballast]. Nor a halogen [too hot](?). Don't know about LED's or "high efficiency" incandescents, but heat dissipation seems to be a factor. Can't change the fixture since I'm renting [and the landlord would be loath to retrofit hundreds of units]. Since I don't have a ready replacement for my one remaining incandescent, I stocked up on Jan 31. Prematurely, it seems.

While I like CFL's, it seems most people don't, particularly families with [small] children, since a broken CFL releases mercury, which is toxic. Also, I prefer the lumen output of a 100 watt equiv (27 watt CFL). Ultimately, I think LED's will be the long-term solution. I did buy an LED just to try it, but the brightest I've found is barely a 60 watt equivalent.

This was one of the few cases where the regulation outpaced the technology.

about 3 months ago

End of Moore's Law Forcing Radical Innovation

Forever Wondering Re:links (275 comments)

Thanks for this. It could [very easily] be the one I was thinking of.

about 3 months ago

End of Moore's Law Forcing Radical Innovation

Forever Wondering Re:Rock Star coders! (275 comments)

There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. doubling the processor speed pales in comparison with changing an O(n**2) algorithm to an O(n*log(n)) one).
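To make that concrete, here's a hypothetical example (mine, not from the article): detecting a duplicate in an array by comparing every pair is O(n**2), while sorting a copy first and scanning neighbors is O(n*log(n)), and for large n that swamps any constant-factor gain from a faster clock.

    /* Hypothetical illustration of O(n**2) vs O(n*log(n)):
     * duplicate detection by brute force vs sort-then-scan. */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    /* O(n**2): compare every pair. */
    bool has_dup_quadratic(const int *a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] == a[j])
                    return true;
        return false;
    }

    static int cmp_int(const void *x, const void *y)
    {
        int a = *(const int *)x, b = *(const int *)y;
        return (a > b) - (a < b);
    }

    /* O(n*log(n)): sort a scratch copy, then check adjacent elements. */
    bool has_dup_sorted(const int *a, size_t n)
    {
        int *tmp = malloc(n * sizeof *tmp);
        if (!tmp)
            return has_dup_quadratic(a, n);   /* fall back if allocation fails */
        memcpy(tmp, a, n * sizeof *tmp);
        qsort(tmp, n, sizeof *tmp, cmp_int);

        bool dup = false;
        for (size_t i = 1; i < n && !dup; i++)
            dup = (tmp[i] == tmp[i - 1]);
        free(tmp);
        return dup;
    }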

SSD's based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache--faster than DRAM but with the same cell size). With near unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done a lot in custom logic, where the logic is replaced with a fast SRAM/LUT.
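A small software analogue of the SRAM/LUT idea (my sketch, not from the post): do the multistep bit-counting once, into a 256-entry table, and afterwards each byte costs a single lookup, which is essentially what the custom-logic folks do with a fast SRAM.

    /* Table-lookup popcount: the per-byte counting loop runs once at init;
     * afterwards each byte is a single table lookup. */
    #include <stdint.h>

    static uint8_t popcnt_lut[256];

    void lut_init(void)
    {
        for (int i = 0; i < 256; i++) {
            int v = i, bits = 0;
            while (v) { bits += v & 1; v >>= 1; }   /* multistep work, done once */
            popcnt_lut[i] = (uint8_t)bits;
        }
    }

    /* Four lookups instead of a 32-iteration loop. */
    unsigned popcount32(uint32_t x)
    {
        return popcnt_lut[x & 0xff]
             + popcnt_lut[(x >> 8)  & 0xff]
             + popcnt_lut[(x >> 16) & 0xff]
             + popcnt_lut[(x >> 24) & 0xff];
    }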

Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].

Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.

Or hybrid cells that have a CPU but also have programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic just as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs. They can encode video at 30-120 fps in real time, but a multicore software solution might be 100x slower.

about 3 months ago

How To Create Your Own Cryptocurrency

Forever Wondering Perfect for Company Scrip (203 comments)

- Saves on printing costs
- Makes direct deposit easier
- Helps attract trendy brogrammers
- Helps corporate tax avoidance without such plans as "Double Irish Dutch Sandwich"
- The new hot topic in boardrooms

about 3 months ago

UK Company Successfully Claims Ownership of "Pinterest" Trademark

Forever Wondering Premium Interest is too long a name anyway (133 comments)

Probably why they trademarked pinterest (3 syllables) vs "Premium Interest" (6 syllables). But, legalities aside [even if Pinterest is forced to change its name], the term pinterest is already poisoned for use by Premium Interest. The market already associates the term with pinterest.com. Trying to use it for another company just adds an extra burden on Premium Interest.

Premium Interest appears to be a muddled/watered down version of reddit (2 syllables). (P)Interestingly, on the premiuminterest.com page, they have a "like" button called "Pi Score".

The smart (entrepreneurial/business/non-lawyering) way out for everybody: Sell the trademark to pinterest. Change premiuminterest.com to piscore.com (2 syllables).

Even without the trademark flap, "Premium Interest" is a lousy name for a business. Barry Diller changed askjeeves.com to ask.com. Everybody remembers Google but [virtually] nobody remembers AltaVista. Lycos? The exception that proves the rule, I guess ...

Time will tell whether Alex Hearn is half as savvy a businessman as Diller.

about 4 months ago

Linux x32 ABI Not Catching Wind

Forever Wondering Re:Subject (262 comments)

With x32 you get:
- 16 registers instead of 8. This allows much more efficient code to be generated: register pressure is reduced, so automatic variables don't have to be spilled/reloaded to the stack.
- The calling convention carried over from the 64 bit ABI, where the first 6 arguments are passed in registers instead of pushed/popped on the stack.
- Single-instruction 64 bit arithmetic: if you need a 64 bit op (e.g. long long), the compiler generates one 64 bit instruction (vs. multiple 32 bit ops).
- RIP-relative addressing, which works great when a lot of dynamic relocation of the program occurs (e.g. .so files).

You get all these things [and more] if you port your program to 64 bit. But, porting to 64 bit requires that you go through the entire code base and find all the places where you said:
    int x = ptr1 - ptr2;
instead of:
    long x = ptr1 - ptr2;
Or, you put a long into a struct that gets sent across a socket; you'd need to convert those to fixed-width ints (see the sketch below).
Etc ...

Granted, these should be cleaned up with abstract typedef's, but porting a large legacy 32 bit codebase to 64 bit may not be worth it [at least in the short term]. A port to x32 is pretty much just a recompile. You get [most of] the performance improvement for little hassle.
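As a hedged sketch of the socket case (names are illustrative): a bare long changes size between -m32/-mx32 (4 bytes) and -m64 (8 bytes), so a struct that goes over the wire needs fixed-width types if the x32 "just recompile" promise is to hold, and pointer differences belong in ptrdiff_t.

    /* Illustrative only: why a bare 'long' in a wire struct bites a 64-bit port.
     * sizeof(long) is 4 under -m32/-mx32 but 8 under -m64, so the two ends
     * of the socket disagree on layout; fixed-width types stay stable. */
    #include <stdint.h>
    #include <stddef.h>

    struct wire_msg_bad {
        long    seq;            /* 4 bytes on x32, 8 bytes on x86-64 */
        long    payload_len;
    };

    struct wire_msg_good {
        int64_t seq;            /* same size under every ABI */
        int32_t payload_len;
    };

    /* Pointer differences belong in ptrdiff_t, not int (or long). */
    ptrdiff_t span(const char *p1, const char *p2)
    {
        return p1 - p2;
    }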

It also solves the 2038 problem, because time_t is now defined to be 64 bits, even in 32 bit mode. Likewise, in struct timeval, the tv_sec field is 64 bits.

about 4 months ago

Jury Finds Newegg Infringed Patent, Owes $2.3 Million

Forever Wondering Re:Diffie was awesome (324 comments)

I still have my original copy of the IEEE journal paper that I clipped in the 1970's. It stood out as a landmark paper then. About 15 years ago, I was at a technical talk and was able to get Martin Hellman to autograph it.

about 5 months ago

Is a Postdoc Worth it?

Forever Wondering Re:I'd do a postdoc (233 comments)

Let the person dream [just like the postdoc they want to do]

about 5 months ago

Intel Opens Doors To Rivals, Maybe

Forever Wondering Re:AMD may benefit (59 comments)

You're right; graphene is probably the future. I think so too, if the [zero] bandgap problem can be solved. There was an article on /. recently indicating that somebody had made a breakthrough here. In the interim, graphene might be useful in replacing the copper/aluminum intrachip interconnect.

A graphene processor running at 100GHz would [probably] outperform a 32 core x86. Initially, a graphene chip would probably have far fewer gates. Thus, a true RISC ISA [like ARM] would be the logical choice. It could run x86 in software emulation (e.g. QEMU) and still beat the pants off the best Intel offering. Yum.

As to process/die shrinkage, I, too, have seen the announcements about the end of Moore's Law. They always seem to recur on the same interval as ML itself [every 2 years]. I've also seen articles about techniques to extend ML for the foreseeable future. No worries.

The other part of the puzzle is memory bandwidth. Getting a cache miss [no matter how large a cache (e.g. 12 MB)] slows things down to the point where advances in chip computation speed stall on memory transfer.

For that, we'd need a DRAM replacement like magneto-resistive RAM [or ferroelectric RAM]. MRAM has the same cell size as DRAM, retains data without power, and is at least 10x faster than DRAM. L1 cache is pure static RAM [which has active power draw and large footprint]. MRAM is as fast as L2 cache.
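A rough way to see that wall (my sketch, with arbitrary sizes): chase pointers through a randomly permuted array much larger than the cache. Each load depends on the previous one, so the loop runs at DRAM latency no matter how fast the core is; wrap it in a timer and compare against a sequential scan of the same array.

    /* Sketch of the cache-miss wall: dependent loads through a working set
     * far larger than any cache run at DRAM latency, not core speed.
     * Array size and iteration count are arbitrary. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N ((size_t)1 << 23)     /* 8M entries * 8 bytes = 64 MB */

    int main(void)
    {
        size_t *next = malloc(N * sizeof *next);
        if (!next)
            return 1;

        /* Build a single-cycle random permutation (Sattolo's algorithm). */
        for (size_t i = 0; i < N; i++)
            next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        /* Each load depends on the previous one: no prefetch, no overlap. */
        size_t p = 0;
        for (size_t hops = 0; hops < N; hops++)
            p = next[p];

        printf("%zu\n", p);         /* keep the chase from being optimized away */
        free(next);
        return 0;
    }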

Hewlett Packard [which has been taking the lead in MRAM development] has a roadmap for MRAM deployment:
- Replace flash memory [and unlike flash, MRAM doesn't "wear out"]
- Replace DRAM
- Replace/eliminate L2 [and L3] cache
- Put MRAM at the heart of an SOC solution
HP is willing to license the tech to anybody that wants it. Unfortunately, the last announcement of any real progress was a while back.

about 5 months ago

Intel Opens Doors To Rivals, Maybe

Forever Wondering Re:AMD may benefit (59 comments)

ARM is starting to encroach on x86 in the server space:
- lower data center power requirements
- they're coming out with a 64 bit version
- ARM has a much smaller die footprint.
Intel must do ARM to stay in that game.

ARM would not be Intel's first foray into architectures other than the x86 line [8080, 8085, 8086, 80186, 80286, 80386, 486, 586, 686]. Remember Itanium [;-)], but also the 432. The Itanium and the 432 didn't pan out because [the market for] x86 was so strong, but this indicates that while Intel is wedded to x86, it isn't slavish to it either. They care more about making chips at a profit than about any given processor architecture. x86 has been a great tool to allow them to do that, but x86 is just a means to an end for them.

When x86 ceases to be the asset it currently is, Intel will adopt whatever the market demands. The trend here is ARM (vs. sparc, mips, etc.). At this point, [even] Intel can't kill ARM. There's too much demand for it now [it's a better solution for mobile and embedded/hybrid systems and will surpass x86 in the server space in the near future]. Intel is adapting/reacting now, while it has time to do so on its own terms, instead of waiting 10 years and being forced to do it in a panic.

Contrast this with MS and Windoze. MS lost the race for the mobile space because of its insistence on Windoze. Intel won't make the same mistake, if for no other reason than that they saw what it did to MS.

As to MS, most likely, in 10 years, we'll see MS/Office running on OSX/iOS, Linux, and Android with Windows just a fond memory.

Long term, Intel must become a foundry, because it will lose its process generation edge (e.g. 22nm->14nm; after 6(?)nm there isn't much room left, and others will catch up).

Intel will make money on this. In the mid 80's, Intel was selling its first generation 386 chip for $750. An Intel engineer told me that the same chip was designed to be profitable even if it sold [had to be sold] for $35.

about 5 months ago

Intel Opens Doors To Rivals, Maybe

Forever Wondering AMD may benefit (59 comments)

This forces Global Foundries to be more competitive with Intel, which benefits AMD.

GF, TSMC, etc. have been riding the [profitable] curve of being a generation back. That is, Intel is always a generation [or two] ahead, but also incurs significant R&D costs to do so. The competitors could wait and get the same results for far less investment in R&D. They could do this because Intel wasn't competing with them [by producing ASIC's, FPGA's, etc.].

This forces the non-Intel foundries to produce cutting-edge stuff sooner. AMD was a bit chagrined after spinning off GF and seeing it fall back into the TSMC model [making AMD less competitive against Intel].

The benefit for Intel is threefold:
- More ROI for their expensive fabs. Previously, costs were always recovered because the PC market was always expanding. With this now shrinking, a nextgen Intel fab may need to do piece work to stay profitable.
- Forcing the competition to compete head on [with the increased costs of being first generation], weakening them in the process [pun intended].
- A toe-in-the-water with ARM and mobile space [Atom notwithstanding] as a hedge against x86 arch going the way of the dinosaurs [without the stigma to x86 of a full fledged announcement of direct ARM support].

about 4 months ago

Researcher Offers New Perspective On Stuxnet-Wielding Sabotage Program

Forever Wondering Re:Interesting quote (46 comments)

I think "low yield" was referring to the nature of the over-pressure attack (vs. the rotor speed attack). Or, that things could have been orchestrated to damage/disable all centrifuges at one time [which would have been detected] instead of just increasing the failure rate [which, as Langner pointed out, would confuse/confound the Iranian engineers].

Langner talks a lot about avoiding detection circa 2007 but that being less of a concern in 2009 [e.g. "now that the program has achieved its objectives, let's shock the world with our cyber attack prowess"].

But, perhaps "uproar" was/became a desired result of Stuxnet. I recently got an email from my local congressman regarding defense against cyber warfare.

So, Stuxnet set back the Iranian program a bit. But, it also got Congress thinking about [read: funding] cyber warfare defense [offense is implied].

"Cyber warfare" [although, perhaps, a legitimate concern in the wake of Stuxnet] also becomes the "bogeyman under the bed" that could provide public justification for more NSA-like intrusion/trickery.

about 5 months ago

