
Intel Announces New Chips, Chipsets

michael posted more than 10 years ago | from the intelfanboydotcom? dept.


Saud Hakim writes "Intel showed a prototype of an IEEE 802.11a wireless LAN transceiver built using a 90-nm CMOS (complementary metal-oxide semiconductor) fabrication process. The chip can switch between different networks and frequencies; it is capable of tuning and tweaking itself. It is also capable of detecting what kinds of wireless networks are available nearby and shifting to the most appropriate frequency." Reader serox sends more: "Intel has two big news releases today, and IntelFanboy has them covered. First up, new Xeon processors have been released with a list of improvements. Second, Intel has revealed two significant milestones in the development of extreme-ultraviolet (EUV) lithography that will help lead to the next generation of chip technology."


Frist Post (-1, Offtopic)

OverlordQ (264228) | more than 10 years ago | (#9870555)

OMGn00b

Intel (-1, Troll)

Anonymous Coward | more than 10 years ago | (#9870568)

How long till 64bit processors?

Netcraft confirms... (4, Insightful)

Bi()hazard (323405) | more than 10 years ago | (#9871214)

Well actually Netcraft doesn't confirm it, and Intel may not be dying, but they are going downhill. Does anyone else find these releases underwhelming in light of the recent story about how AMD is pushing ahead while Intel stagnates and delays the releases of 4GHz and 64 bit technology?

Quite simply, Intel took shortcuts to get temporary advantages, and it's coming back to haunt them. The GHz myth is being dispelled and Intel is falling behind in the technologies that really matter. Today's new releases are only stopgap measures: a slight bump in the Xeon and some WLAN card that's only going to be a minor player in an area Intel has not been focusing on heavily.

What is Intel focusing on? Branding. Marketing. Getting their stickers on everything and being known to the general public. Intel? "oohh they make computers!" AMD? "Durr is that those missiles in Iraq?" That may be why Intel still has a commanding lead in the processor market, but it will only take them so far. As word of mouth carries AMD to dominance in the hobbyist market, high end buyers will follow the hobbyists' lead. Enterprises will flock to 64 bit technology now that it is maturing on AMD, and still unavailable on Intel. Once AMD has taken control of the high-end market, the midrange will follow along like lemmings. All they know is, they want what the big boys have. And the big boys want AMD to go along with their fancy cars [shawnandcolleen.com] and fast women [spilth.org] .

This downward spiral will continue until Intel loses its position as the king of processors and becomes just another hardware company. Nobody will care about what your sticker says is inside, and consumers will win as competition and diversity increase.

A few years out, Netcraft will finally deploy their stunning new technology that can detect your processor type, even through NAT. At that point the truth will become stark and clear, slapping us all in the face with the blinding realization that... Intel IS DYING! You heard it here first, folks: The future belongs to BSD on AMD. Beowulf clusters of BSD on AMD. Wintel is Dying. Wintel is a decrepit artifact of the past, to be fondly remembered in museums along with the 8 inch floppy and "turbo" buttons.

p.s. Netcraft also confirms that the baby-shit BEIGE OF THE END TIMES is spreading like a cancer. Oh god, it's so horrible; what kind of sadistic bastard is behind this?

Re:Netcraft confirms... (1)

GuyFawkes (729054) | more than 10 years ago | (#9871768)


How on earth is this drivel insightful????
(Yeah, OK, I know it was supposed to be humorous, so why the fuck mod it insightful? Oh yeah, it knocks Wintel... nuff sed)

Netcraft? What the fuck? Every domain must have its own webserver, and every webserver must report truthfully what OS and hardware it is running, and all of a sudden this will account for the vast majority of CPUs sold?????

Ooh, Intel is dying because of the MHz myth and the scale of the chip features in nanometres... yeah, and AMD of course are not using silicon, or lithography, or indeed pretty much the same power consumption, oh no...

I know, let's design an open source 128-bit CPU, that'll show those evil fuckers at Wintel who's boss...

And when you finish the first production run, send two to Slashdot; they could use one to bolster the server against these 505 errors and another to replace whatever piece of shit box this new scheme was designed on.

Re:Netcraft confirms... (2, Insightful)

TrancePhreak (576593) | more than 10 years ago | (#9871811)

I don't think 64bit is ready yet in the Windows world, which is probably where most of Intel's purchases come from anyways. Drivers are still lacking, Windows 64 is still not ready, etc.

Re:Netcraft confirms... (1)

fitten (521191) | more than 10 years ago | (#9872554)

Hell... 64-bit isn't ready yet in the Linux world either. It took me almost 2 months and 3 distros to get my AMD64 stable enough for everyday use.

Re:Netcraft confirms... (1)

mandolin (7248) | more than 10 years ago | (#9873042)

Mind mentioning what you settled on? I've been considering getting an AMD64 setup.

Re:Netcraft confirms... (1)

drinkypoo (153816) | more than 10 years ago | (#9872963)

Intel is laying on the marketing because it works. Microsoft hasn't released x86-64 Windows XP, and why? There are obviously drivers for certain pieces of hardware and we'd see lots more if the damn OS were already out. Plenty of people would be willing to design a system around a version of x86-64 which supported only ATI and nVidia graphics cards, only VIA and nVidia chipsets, only adaptec scsi cards and 3com network cards, et cetera. I can only conclude that it is because intel and Microsoft are in bed to a sufficient extent to get Microsoft to delay it.

Meanwhile, intel is reputed to be working on a multi-core processor based on Pentium M. It's going to be a while before most people actually NEED more than 1GB of memory, let alone 2 or 4, even with multi-core processors. I don't think AMD is out of the woods yet.

Re:Netcraft confirms... (1)

hallgreng (794290) | more than 10 years ago | (#9874336)

AMD and Intel have been trading blows for years now. How was AMD's product line doing for the six months before the A64? It was totally smoked by Pentium 4s across the board. Just because Intel has some setbacks and isn't the fastest CPU for Doom 3 anymore doesn't mean they're spiraling into oblivion.

roffle (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9870575)

no really

roffle

Story is incorrect. (5, Informative)

mlyle (148697) | more than 10 years ago | (#9870577)

It's new Intel server platforms based on the Xeon that have been released, not new Xeons.

That being said, this really bulks up the low-intermediate end of the Intel enterprise offering.

Re:Story is incorrect. (1)

Jeff DeMaagd (2015) | more than 10 years ago | (#9871134)

Apparently there are new Xeons, ones that support Intel's renaming of x86-64, unless those have been out for a long while and no one told the Slashdot editors.

Re:Story is incorrect. (1)

mlyle (148697) | more than 10 years ago | (#9872090)

From the article:

The Intel Xeon processor, which was introduced in June, is the first Intel Xeon processor to offer Intel® Extended Memory 64 Technology (Intel® EM64T). EM64T helps overcome the 4-Gigabyte memory addressability hurdle, providing software developers flexibility for writing programs to meet the evolving demands of data-center computing. The processor also features Demand Based Switching with Enhanced Intel SpeedStep® Technology to dynamically adjust the processor's power usage up to 31 percent to reduce operating costs and heat issues.

Re:Story is incorrect. (1)

Jeff DeMaagd (2015) | more than 10 years ago | (#9872168)

I didn't realize it was out then, I thought it wasn't going to be available until mid-fall. I just hit the Intel site for information...

Re:Story is incorrect. (1)

bhtooefr (649901) | more than 10 years ago | (#9874625)

SpeedStep? Good - Intel's NetBurst chips needed it ;-)

On a more serious note, it looks like the Xeon is going to be a better Oppie competitor - x86-64, SpeedStep (read: Cool & Quiet), etc.

hot hot (5, Funny)

scaaven (783465) | more than 10 years ago | (#9870589)

now I can fry an egg on my LAN card too!

Re:hot hot (2, Funny)

b374 (799492) | more than 10 years ago | (#9870649)

I won't get one of those 'til they bundle it with a shiny cooler with a neon (rhymes with Xeon) LED fan...

Re:hot hot (1)

jubei (89485) | more than 10 years ago | (#9870977)

I thought AMD was king of egg frying. [ncku.edu.tw]

Leakage Current and Heat (5, Insightful)

macklin01 (760841) | more than 10 years ago | (#9870593)

But the leakage current problems have been increasing with process shrinks (not just at Intel, but also at IBM and AMD). So they can use even smaller lithography. Great. Will the leakage current and associated heat suck even worse than Prescott?

Re:Leakage Current and Heat (0)

Anonymous Coward | more than 10 years ago | (#9870812)

Will the leakage current and associated heat suck even worse than Prescott?

No. Next question?

Re:Leakage Current and Heat (2, Informative)

Short Circuit (52384) | more than 10 years ago | (#9870825)

According to the article, they can use less power, due to the feature shrinkage.

I won't pretend to understand the relationship of power and leakage wrt feature size, though.

Re:Leakage Current and Heat (2, Interesting)

nelsonal (549144) | more than 10 years ago | (#9871233)

My very basic understanding of the relationship is this: it takes less power to cause a smaller semiconductor to switch states; however, as you move wires closer together you start to have capacitive leakage and inductive effects from the wires. Up until a few years ago, the former was significantly larger than the latter, but in recent years they have become more equal in magnitude of effect.
I like to think of semiconductors (and most electrical things) in terms of fluid flow (not ideal, but you can get the picture better). Imagine a water valve with both hot and cold water entering and leaving (they share a mixing area). When water arrives, some processing is done that assigns it a path, and when it hits the gate the force of the water opens the gate; after the water leaves, it closes. If we shrink the valve down it will require less water arriving before it opens, however as we move the hot pipe closer to the cold pipe some undesired heat is transferred. This is something similar to the effects designers (and manufacturers) are dealing with on semiconductors.
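To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (the capacitance, voltage, frequency, and leakage figures are invented purely for illustration, not taken from any datasheet): switching power scales as C·V²·f and shrinks along with the features, while the always-on leakage term does not shrink with them.

```c
#include <stdio.h>

int main(void)
{
    /* All numbers below are invented purely for illustration. */
    double c_switched = 20e-9;  /* effective switched capacitance, farads */
    double vdd        = 1.3;    /* core supply voltage, volts             */
    double freq       = 3.0e9;  /* clock frequency, hertz                 */
    double i_leak     = 25.0;   /* total leakage current, amps            */

    double p_dynamic = c_switched * vdd * vdd * freq; /* P_dyn  = C * V^2 * f */
    double p_leakage = i_leak * vdd;                  /* P_leak = I_leak * V  */

    printf("dynamic (switching) power: %6.1f W\n", p_dynamic);
    printf("leakage (static) power:    %6.1f W\n", p_leakage);
    printf("total:                     %6.1f W\n", p_dynamic + p_leakage);
    return 0;
}
```

Shrinking the process lowers C and V in the first term, which is the advertised power saving; the worry in this thread is that the second term keeps growing regardless.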

Re:Leakage Current and Heat (1)

Vancorps (746090) | more than 10 years ago | (#9871459)

I seem to recall an exact model of this at MIT. It was the water based transistor. Neat stuff and a great way to explain something that is actually pretty simple.

Re:Leakage Current and Heat (1)

corngrower (738661) | more than 10 years ago | (#9871241)

I read somewhere today that Intel engineers have developed a new compound to use for the insulating layer on the gates, to replace SiO2. This was said to reduce the leakage currents and allow finer lithography. IIRC the article said they were planning to start using it for 55 nm lithography.

Re:Leakage Current and Heat (3, Informative)

mytec (686565) | more than 10 years ago | (#9871940)

I read somewhere today that Intel engineers have developed a new compound to use for the insulating layer on the gates, to replace SiO2

Yeah, it's called "high-K". Here is a link [physorg.com] .

Re:Leakage Current and Heat (2, Informative)

haroldhunt (199966) | more than 10 years ago | (#9871960)

> But the leakage current problems have been increasing with __process shrinks__ [my emphasis] (not just at Intel, but also at IBM and AMD).

Not really true. Leakage current doesn't increase significantly with just a process shrink; rather, it tends to be associated with process shrinks because one of the main reasons for a process shrink is to rev the clock rate up. In this case there is little reason to rev the clock rate on an 802.11a/b/g chip that is processing signals at pre-defined frequencies. In other words, they have held all other things equal and shrunk the die; leakage current should not dramatically increase.

You'll also notice that the article mentioned power savings as a result of the shrink: so the answer was already in the article. If the leakage current and heat were going to be worse than Prescott they would only have touted the cost savings of the smaller die, not the power savings as well.

Harold

Re:Leakage Current and Heat (1)

macklin01 (760841) | more than 10 years ago | (#9872147)

Thanks for the interesting responses, folks. I feel I've learned a lot. Perhaps I didn't RTFA well enough, but I was under the impression that these were two separate news items: one about wifi chipsets, and another about a new lithography technique that Intel would be using ubiquitously, including for future CPUs.

I definitely agree about the power savings from the process shrinks (thanks for the correction!); we saw those in the Coppermine->Tualatin shrinks and the Willamette->Northwood shrinks, where lower voltages could be used for a fixed speed (and indeed for higher speeds), leading to reduced power consumption and heat dissipation.

However, I was also thinking of the point brought up above, where the phenomenon of leakage gets worse the smaller the features are (and the lower the voltages are, since the relative contribution of such leakage thereby increases). Although the press release didn't mention these (and even cited the power reductions from above), I'm not entirely convinced that the problem has been solved (although I take heart from the SiO2 replacement). After all, Intel was saying very recently that the Northwood->Prescott shrink would yield power savings, but it turned out that at this scale, the leakage problems nearly negated the expected savings.

Very interesting. I wish I understood these issues better, and I'm glad that there are others here to help explain!!! Thanks -- Paul

90nm 802.11a (-1, Troll)

hpa (7948) | more than 10 years ago | (#9870598)

What a concept, someone having a <em>prototype</em> of a product others are already shipping, built in a process which both they and others are shipping product from.

If this is news, it's only because of the plain pathetic performance of Intel in the WLAN segment so far.

em... (0)

Anonymous Coward | more than 10 years ago | (#9870810)

What a concept, someone having a prototype of a product

the "em" tag is not supported on slashdot - only the "ahem" tag...

Can't wait for EUV lithography! (4, Funny)

dFaust (546790) | more than 10 years ago | (#9870600)

It's just like Ultraviolet lithography.... TO THE EXTREEEEEEME!!!!!

Hey, at least they didn't spell it "Xtreme"

Re:Can't wait for EUV lithography! (1, Funny)

Anonymous Coward | more than 10 years ago | (#9870697)

It's just like Ultraviolet lithography.... TO THE EXTREEEEEEME!!!!!

I'm confused as to why this wasn't announced on sunday, Sunday, SUNDAY!!!!

Re:Can't wait for EUV lithography! (2, Funny)

SharpFang (651121) | more than 10 years ago | (#9870708)

Extreme ultraviolet? They SHOULD have used the X... After all it's called X-rays.

Re:Can't wait for EUV lithography! (1)

Steve525 (236741) | more than 10 years ago | (#9872182)

That's actually a funny story, with more point than you realize. A while ago, a number of groups spent a lot of money on x-ray lithography, without any commercial success. Because of this, x-ray lithography has a bad reputation. So, to distance the technique from x-ray lithography, and to more closely align it with the very successful optical lithography, they changed the name from projection x-ray lithography to EUV lithography.

This also points out an interesting cultural difference between Americans and Japanese. In America the proper thing to do is distance yourself from an unsuccessful technique. The Japanese, however, still call it x-ray lithography, because after sinking boatloads of money into x-ray lithography they don't want to feel they were wrong.

Re:Can't wait for EUV lithography! (1)

Bender_ (179208) | more than 10 years ago | (#9872523)

What a load of c... inaccuracies...

The wavelength used (AFAIR 15.4 nm) is still far from hard x-ray. The technologies for the generation, masks, and "optics" of x-ray and EUV radiation are very different.

Cool - I'm going to get an x86-64 Dell (dude) (3, Interesting)

Virtual PC Guy (720945) | more than 10 years ago | (#9870608)

Yay - now it will be easy for guys like me (lazy people who don't feel like assembling machines by hand anymore) to get an x86-64 box from Dell:

http://www1.us.dell.com/content/products/compare.aspx/precn?c=us&cs=04&l=en&s=bsd [dell.com]

Or should I say 'Intel® Extended Memory 64 Technology' (whatever, guys - everyone knows that it is just AMD's tech)

Re:Cool - I'm going to get an x86-64 Dell (dude) (0, Flamebait)

geekoid (135745) | more than 10 years ago | (#9870634)

Yes, because AMD has made their business with unique solutions.... oh wait.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Jeff DeMaagd (2015) | more than 10 years ago | (#9870664)

Are you sure that isn't the old memory segment technology introduced with the P6 die? I don't think that is x86-64.

"Intel® Extended Memory 64 Technology" doesn't say 64 bit processing. It is kind of like the old memory segmentation method to reach beyond the limits of the address register.

Re:Cool - I'm going to get an x86-64 Dell (dude) (4, Informative)

hpa (7948) | more than 10 years ago | (#9870680)

No, it's not. EM64T, or IA32e (make up your mind, guys) is Intel's clone of AMD64/x86-64.

You're thinking of PAE.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Jeff DeMaagd (2015) | more than 10 years ago | (#9870780)

Interesting. I thought they weren't due out until later this year, and only on Xeons, but it looks like the Dell site shows it is available for some P4s too.

Re:Cool - I'm going to get an x86-64 Dell (dude) (0)

Anonymous Coward | more than 10 years ago | (#9870829)

Interesting, it looks like you are a complete retard that doesn't bother to learn what anything about a subject before you claim to be an expert. Idoit.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Anonymous Crowhead (577505) | more than 10 years ago | (#9870915)

Interesting, it looks like you are a complete retard that doesn't bother to learn
what anything about a subject before you claim to be an expert. Idoit.


You do that too?

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Cheeko (165493) | more than 10 years ago | (#9870838)

Are these actually 64-bit CPUs, or are they simply 64-bit memory extensions? Anyone who knows the internals of EM64T care to elaborate? Register sizes, instruction issues, etc?

I'm curious what the internals comparison is between "extensions" and straight-up 64-bit processing.

Re:Cool - I'm going to get an x86-64 Dell (dude) (0)

Anonymous Coward | more than 10 years ago | (#9870945)

They are instruction-set identical to AMD64. They have cruddy memory subsystems in comparison, though; they can't do I/O DMA memory mapping to higher than the 32-bit address space, so you still need different drivers for AMD64 and EM64T.

Intel will presumably rely on the MHz myth again to beat AMD's superior hardware. And people will fall for it. Again.

Re:Cool - I'm going to get an x86-64 Dell (dude) (2, Informative)

Anarke_Incarnate (733529) | more than 10 years ago | (#9871282)

Except that AMD64 has 40-bit memory addressing while the EM64T shite has 36-bit memory addressing. Read the stuff Red Hat had to do to make it work with their kernel. Intel kludged this one.
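For scale (simple arithmetic, not a claim pulled from the parent's links): 36 versus 40 physical address bits works out to 64 GiB versus 1 TiB of addressable memory.

```c
#include <stdio.h>

int main(void)
{
    unsigned long long em64t_bytes = 1ULL << 36; /* 36-bit physical addresses */
    unsigned long long amd64_bytes = 1ULL << 40; /* 40-bit physical addresses */

    printf("36-bit: %llu GiB addressable\n", em64t_bytes >> 30); /*   64 GiB */
    printf("40-bit: %llu GiB addressable\n", amd64_bytes >> 30); /* 1024 GiB */
    return 0;
}
```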

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Virtual PC Guy (720945) | more than 10 years ago | (#9871041)

http://www.intel.com/technology/64bitextensions/ [intel.com]

Has the details for you -

Intel® Extended Memory 64 Technology is one of a number of innovations being added to Intel's IA-32 Server/Workstation platforms in 2004. It represents a natural addition to Intel's IA-32 architecture, allowing platforms to access larger amounts of memory. Processors with Intel® EM64T will support 64-bit extended operating systems from Microsoft, Red Hat and SuSE. Processors running in legacy* mode remain fully compatible with today's existing 32-bit applications and operating systems.


Hmm... a memory extension that allows you to run 64-bit operating systems. Got to love marketing talk.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

Cheeko (165493) | more than 10 years ago | (#9871624)

That really doesn't answer the question so much. It does clarify that it runs both memory modes, but what about the actual processing portion of the core? RISC processors (the ones these are supposedly going to cut into) have access to 64 bits for memory, but also have significantly more powerful processing mechanics. As I understand this from the market speak (which is confusing at best, hence the question), this is basically the same exact processing core as the standard x86 with some tweaks to allow it to take the larger memory. How does this affect data types, math operations, instruction size, overall number of instructions, processing logic, etc.? Basically, how does this impact all the OTHER aspects of a processor besides just the amount of memory it has access to? Higher-precision floating point, perhaps? Stuff like that? I'm sure this is all in the specs, but reading technical documents isn't my idea of a fun time. Thought maybe someone here had internals info and could just post.

Re:Cool - I'm going to get an x86-64 Dell (dude) (0)

Anonymous Coward | more than 10 years ago | (#9872184)

posting AC, but
64 bit address space
32 bit data space
thus: 32 bit instructions
-ac

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

fitten (521191) | more than 10 years ago | (#9872484)

The AMD64 running a 64-bit compiled binary is a 64-bit CPU. Addresses are 64-bit values (although only 40 bits are "honored" as far as the hardware goes right now). sizeof(long) == 8, etc. Doubles are still 64-bit; I don't think they do long double or anything like that (no 128-bit floats). The Intel 64-bit extension stuff is supposed to be binary ISA compatible with the AMD64 x86-64 ISA.

As far as other stuff, summaries of the AMD64 programming model can be found all over. There's probably one on ArsTechnica.
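A quick way to see the LP64 data model the parent describes: a generic C sketch, with the 4 / 8 / 8 / 8 output being what you would expect from a 64-bit x86-64 build (this is not tied to any particular distro or compiler).

```c
#include <stdio.h>

int main(void)
{
    /* On an LP64 target such as 64-bit Linux on x86-64, expect 4 / 8 / 8 / 8. */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}
```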

Re:Cool - I'm going to get an x86-64 Dell (dude) (2, Interesting)

Slack3r78 (596506) | more than 10 years ago | (#9871177)

Actually, EM64T is an incomplete clone of x86-64 by most reports, and doesn't appear to be binary compatible with x86-64. The x86-64 Linux distros are having to hack in support for the CPUs that essentially still does paging in software.

On top of that, all the ALUs on the CPU are still 32-bit, and it does not support the NX bit. There's a reason why Intel is only touting it as an "extended memory" architecture. It's an incomplete hack on top of the existing 32-bit chips that seems like nothing more than an attempt by Intel to save face.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

IGnatius T Foobar (4328) | more than 10 years ago | (#9871937)

Heh. Intel has been very careful about choosing its words. They're doing press releases that say things like "Xeon 64 will run software currently being developed for the AMD Opteron with very little modification." They categorically refuse to call their new chips "AMD64 compatible" even though that's exactly what they are. They licensed the AMD64 instruction set and renamed it.

Ben Williams of AMD even said, "AMD welcomes Intel to the world of AMD64." [com.com] Heh.

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

networkBoy (774728) | more than 10 years ago | (#9872218)

They licensed the AMD64 instruction set and renamed it.


That's the biggest load of horseshit on the planet. The chip is not a 64-bit processor; it is a 32-bit processor with a 64-bit memory address space. . . . moron.
nB

Re:Cool - I'm going to get an x86-64 Dell (dude) (1)

andreyw (798182) | more than 10 years ago | (#9870790)

Uh, yeah - but do realize that the Intel CPU will naturally be of a completely different design. It's not like they are rebadging AMD chips. I.e., you will feel like a winner if the Intel design is better than the Athlon64/Opteron; otherwise you'll just feel like a loser stuck with overpriced, sucky hardware.

a? wtf? (3, Interesting)

rokzy (687636) | more than 10 years ago | (#9870609)

Isn't 802.11a the old one that had a few benefits in certain situations over 802.11b, but is now superseded by 802.11g?

Re:a? wtf? (5, Informative)

hpa (7948) | more than 10 years ago | (#9870642)

Not really. 802.11a operates in the 5 GHz band, and can thus coexist with 802.11b without suffering degradation, unlike 802.11g which does degrade when .11b devices are present -- if nothing else because the .11b devices hog the channel for 5 times as long.

Thus, heavy-use WLANs like corporate installations are frequently A+G, and a lot of current wlan client chips are also A+G.

In the current wlan market, 802.11a is the premium solution; unfortunately both in terms of cost and performance.

Re:a? wtf? (5, Informative)

ElForesto (763160) | more than 10 years ago | (#9870843)

It's worth noting that 802.11a has a significantly shorter theoretical maximum range when compared to the 2.4GHz (802.11b/g) solutions.

Re:a? wtf? (2, Interesting)

Jeff DeMaagd (2015) | more than 10 years ago | (#9870982)

It's worth noting that 802.11a has a significantly shorter theoretical maximum range when compared to the 2.4GHz (802.11b/g) solutions.

That is true but it is also far less crowded, with five or eight available channels in most countries. With the recent FCC posting, "a" is considered an indoor technology. I get pretty good range with "b" - something pretty close to the claimed 1000ft with the equipment I have, but that is with no obstructions. I really don't need that sort of range. The range problems a lot of people have with APs typically involve poor location and nothing more.

Re:a? wtf? (0)

Anonymous Coward | more than 10 years ago | (#9870733)

Offtopic? And a guy above was called Troll for nothing. Wow, the Intel shareholders are really out in force today.

In other news... (4, Funny)

SharpFang (651121) | more than 10 years ago | (#9870615)

Texas Instruments released a new microcontroller based on the revolutionary TTL ( Transistor-Transistor Logic) technology!

Re:In other news... (1)

JamesP (688957) | more than 10 years ago | (#9870755)

Parent is funny

And moderator of parent should NOT be posting in slashdot...

Sorry!!!

Re:In other news... (0)

Anonymous Coward | more than 10 years ago | (#9870804)

Parent is not funny.

And moderator of parent should NOT let you post.

Not Sorry!!!

Re:In other news... (0)

Anonymous Coward | more than 10 years ago | (#9870995)

Actually, JamesP is correct: the original posting is funny, and the moderator is a clueless twit who shouldn't have mod points.

10 GHz? (5, Interesting)

Pusene (744969) | more than 10 years ago | (#9870644)

Too bad this type of wireless system is not allowed to be used in the better parts of the world, due to the regulation of radio frequencies. Why not use this adaptive frequency model in CPUs? Let the clock speed scale with the load on the processor! (I mean scale in 30 MHz increments or something, not step between two speeds like it does now on some CPUs!)

Re:10 GHz? (3, Informative)

Short Circuit (52384) | more than 10 years ago | (#9870845)

Why not use this adaptive frequency model in CPUs.

They do. It's called SpeedStep or LongRun.

Re:10 GHz? (4, Funny)

IPFreely (47576) | more than 10 years ago | (#9870963)

Too bad this type of wireless system is not allowed to be used in the better parts of the world, due to the regulation of radio frequencies.

That's OK. I don't live in the better parts of the world. I live in the US.

Re:10 GHz? (1)

mdemirha (781221) | more than 10 years ago | (#9871000)

Just FYI, the operating frequency of the radio has *NOTHING* to do with its speed. At whatever frequency the radio operates, it uses a fixed amount of frequency width (which is on the order of 30 MHz, not gigahertz). So, if I am on 10 GHz, it means that I am allocating frequencies between (10 GHz - 15 MHz) and (10 GHz + 15 MHz). It doesn't mean that I have a CPU running at 10 GHz. The operating speed of these radios is based on reception power, which is generally inversely (and exponentially) proportional to the distance from the destination. So anyway, there is nothing in common between a radio and a CPU except the fact that radios generally include a small CPU inside them for their computation purposes.
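A quick worked example of that point, with purely illustrative numbers (a 30 MHz wide channel centered at 10 GHz):

```c
#include <stdio.h>

int main(void)
{
    double center_hz    = 10e9;  /* carrier / center frequency (illustrative) */
    double bandwidth_hz = 30e6;  /* occupied channel width (illustrative)     */

    double lower = center_hz - bandwidth_hz / 2.0;
    double upper = center_hz + bandwidth_hz / 2.0;

    /* The radio occupies only ~0.3% of its center frequency in spectrum;  */
    /* the 10 GHz figure says nothing about how fast any on-chip CPU runs. */
    printf("channel: %.3f GHz .. %.3f GHz (%.0f MHz wide)\n",
           lower / 1e9, upper / 1e9, bandwidth_hz / 1e6);
    return 0;
}
```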

Re:10 GHz? (1)

Jeff DeMaagd (2015) | more than 10 years ago | (#9871310)

It seems like people are misunderstanding what you mean, but you talk about two disparate things in one paragraph.

10GHz is still pretty expensive to deal with for consumer commodity parts for wireless radio. 5GHz is a hard enough sell as it is.

I'm not sure why CPUs don't have a larger range of speeds for dynamic clocking. There may be little power savings benefit for clocking slower than the minimum speed, and not much benefit to having intermediate speeds if the system can switch between the two frequencies when the load changes.

Re:10 GHz? (1)

Pusene (744969) | more than 10 years ago | (#9871816)

Thanks for understanding me. Isn't this /.? I thought us geeks ought to be able to hold two thoughts in our heads at the same time!

Re:10 GHz? (1)

MarcQuadra (129430) | more than 10 years ago | (#9871949)

AFAIK, 'Harvard architecture' CPUs like the ancient 68040 in my Quadra could be clocked ALL the way down, even stopped if need be. When I heard that Intel was introducing 'SpeedStep' so their CPUs could drop from 500 to 400 MHz (or whatever) to save some juice, I couldn't help but think that they missed the boat entirely. You could make very cool, very quiet laptops if you had CPUs that would just clock themselves based on a signal from the memory controller signalling how busy the bus was (bus saturation exceeds 30% for 2 seconds, clock up; drops below 20% for 5 seconds, clock down).
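A minimal sketch of the hysteresis policy described above. The thresholds and timings come from the comment, but governor_tick(), set_cpu_clock(), and the load samples are hypothetical stand-ins for whatever the memory controller and clock generator would actually expose:

```c
/* Toy simulation of a bus-load-driven clock governor.
 * Thresholds, timings, and the "hardware" hooks are all made up.
 */
#include <stdbool.h>
#include <stdio.h>

static bool cpu_fast = false;

static void set_cpu_clock(bool fast)          /* stand-in for the clock generator */
{
    if (fast != cpu_fast) {
        cpu_fast = fast;
        printf("clock -> %s\n", fast ? "high" : "low");
    }
}

/* Called once per simulated second: >30% busy for 2 s clocks up,
 * <20% busy for 5 s clocks down. */
static void governor_tick(double bus_load)
{
    static int busy = 0, idle = 0;

    busy = (bus_load > 0.30) ? busy + 1 : 0;
    idle = (bus_load < 0.20) ? idle + 1 : 0;

    if (busy >= 2)
        set_cpu_clock(true);
    else if (idle >= 5)
        set_cpu_clock(false);
}

int main(void)
{
    /* Pretend bus-saturation samples from the memory controller, one per second. */
    const double load[] = { 0.05, 0.40, 0.45, 0.50, 0.10, 0.15, 0.10, 0.12, 0.08, 0.05 };
    const int n = (int)(sizeof load / sizeof load[0]);

    for (int i = 0; i < n; i++)
        governor_tick(load[i]);
    return 0;
}
```

Running the simulated trace clocks up after the second busy sample and back down after five consecutive idle ones, which is the hysteresis behaviour the comment asks for.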

Re:10 GHz? (1)

mlyle (148697) | more than 10 years ago | (#9872165)

That has nothing to do with Harvard architecture, and your 68040 wasn't a Harvard arch.

Harvard architecture [wikipedia.org] refers to separating instruction and data memories, unlike the von Neumann architectures you find most places. Harvard architectures are still popular in many microcontroller families, though.

Whether parts are certified for static operation (e.g. clock frequency down to 0 Hz) is a completely different matter.

Re:10 GHz? (1)

MarcQuadra (129430) | more than 10 years ago | (#9872262)

Ahh, that's right. I'm a bit rusty with my old CPU technologies.

Re:10 GHz? (1)

Bender_ (179208) | more than 10 years ago | (#9872546)

That has nothing to do with Harvard architecture, and your 68040 wasn't a Harvard arch.

That is not 100% accurate. Actually, it is common to designate CPUs as a Harvard architecture when they use separate data and code caches. For example, it is impossible on the 68040 to modify code that resides in the code cache.

Re:10 GHz? (1)

mlyle (148697) | more than 10 years ago | (#9873063)

This is semantics. Harvard architecture implies separate paths for data and instructions. The path into the CPU for the instructions is the same as the path into the CPU for data.

On the 68040, yes, there is a separate I-cache that isn't coherent with memory writes. But it is quite possible to use instructions that operate on data memory to modify code, as long as you're sure to invalidate the I-cache before the code runs.

Yes, I admit some people use the term Harvard architecture to refer to processor architectures where code and data diverge early... but by this definition even the Pentium 4 is a Harvard architecture with its trace cache, despite it being programmed and having a memory map and external interconnects just like traditional von Neumann machines. I prefer sticking with the original context of the term.

Re:10 GHz? (1)

drinkypoo (153816) | more than 10 years ago | (#9872832)

Intel's new method of throttling is to take fewer instructions off the queue per unit of time. The CPU does less work, so fewer gates switch, so power dissipation as heat is reduced. Why change clock rates when you can just process fewer instructions?

Non-atrocious colours for those who care! (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9870671)

Happily surfing with Bobcat Lynx 386 in pure DOS (0)

Anonymous Coward | more than 10 years ago | (#9872101)

What, there's been a change in the color scheme?

Re:Happily surfing with Bobcat Lynx 386 in pure DO (1)

wizzardme2000 (702910) | more than 10 years ago | (#9872502)

Yeah, it's all yellowish and hard to read in the IT section. However, the 500s + 503s seem to have dissipated.

Wake me when (3, Insightful)

wowbagger (69688) | more than 10 years ago | (#9870685)

Yawn. Wake me when Intel has released real, production ready (NOT 0.2) drivers for Linux for this, or any other modern wireless network chip.

Re:Wake me when (2, Interesting)

Anonymous Coward | more than 10 years ago | (#9870775)

The specs are publically available. Instead of sitting around whining, why don't you get off your ass and write the drivers yourself?

Re:Wake me when (1)

HolyCoitus (658601) | more than 10 years ago | (#9870870)

Instead of posting anonymously to defend a company with billions of dollars that refuses to write the drivers, why don't you divert your energies to signing up for a Slashdot account?

While you're at it, maybe you should think about how retarded that statement you just made was, and rethink it. An acceptable retort would be "Linux sucks, I personally hate it, and Intel is doing the right thing by ignoring it. If you feel differently, write it yourself!" which is what your statement came off as to begin with.

Re:Wake me when (2, Insightful)

debilo (612116) | more than 10 years ago | (#9870913)

Uhm. I think one could expect a vendor to provide drivers themselves. You actually have to pay for their products, remember? You give them money, you make them rich. I really don't feel like giving money to a company just to find out that I'm also paying them to limit my choice.

Grandparent was right, you are wrong.

Re:Wake me when (0)

Anonymous Coward | more than 10 years ago | (#9872511)

Yawn. Wake me when there is practically ANYTHING released on Linux that isn't 0.X versioned.

Wake ME when they publish DSP firmware exemplars (1)

Ungrounded Lightning (62228) | more than 10 years ago | (#9872735)

Yawn. Wake me when Intel has released real, production ready (NOT 0.2) drivers for Linux for this, or any other modern wireless network chip.

Wake ME when they publish the source for the DSP firmware for the chip/core.

a) Visibility into the firmware is just about mandatory for writing your own driver. API documentation is better than nothing, but it's often not enough.

b) Drivers are relatively easy compared to doing work in the signal processing portion. While the FCC really doesn't want you to be changing stuff in there, keeping it secret stifles competition by making it difficult for any but large and well-funded players to build products around the chips.

Re:Wake ME when they publish DSP firmware exemplars (1)

wowbagger (69688) | more than 10 years ago | (#9873845)

They may not be using DSPs as much as FPGAs/ASICs - a great deal of the signal processing for that sort of thing is easier done as parallel blocks of hardware than software.

The FCC is no more worried about you mucking around in the modulator/demodulator than in the driver - either will allow you to cause interference.

(A guy who does software-defined radio for a living.)

Press Release links (2, Informative)

mobby_6kl (668092) | more than 10 years ago | (#9870783)

why would somebody link to a forum reposting the official press release? (well ok I think I know)

New Server Platforms [intel.com]
EUV Lithography [intel.com]

Mesh This! (2, Funny)

lofi-rev (797197) | more than 10 years ago | (#9870803)

Now if everybody would just carry around one of these devices and cooperate in a mesh network then I could finally achieve my dream of....

Well, it would be really cool.

Xeon Nocona / Lindenhurst Embedded Core Available (3, Informative)

starannihilator (752908) | more than 10 years ago | (#9870809)

There has been a great deal of discussion regarding the availability of the Lindenhurst chipset [theinquirer.net], and WIN Enterprises [win-ent.com] is pleased to offer developers the latest Xeon technology for their embedded controllers and platforms. WIN Enterprises, Inc., a leading designer and manufacturer of customized embedded controllers and x86-based electronic products for OEMs, has announced the availability of the latest Intel 64-bit Xeon core module for developers of high-performance embedded platforms - Nocona / Lindenhurst [win-ent.com].

WIN Enterprises is pleased to offer leading-edge, long-life solutions based on Nocona / Lindenhurst for everything from embedded single board computers to platform systems. For OEMs looking to incorporate the newest Xeon technology, WIN Enterprises has developed a proven core module for Nocona / Lindenhurst to create custom embedded controllers. "We have spent an extensive amount of time debugging and perfecting this specific core module," said Chiman Patel, WIN Enterprises' CEO and CTO. "This will allow our OEM customers to bring their application-specific Nocona / Lindenhurst embedded products to market quickly and cost-effectively."

For more information, please contact WIN Enterprises at 978-688-2000 or sales@win-ent.com. Visit www.win-ent.com to learn more about WIN Enterprises' embedded design and manufacturing services.

Re:Xeon Nocona / Lindenhurst Embedded Core Availab (2, Insightful)

drinkypoo (153816) | more than 10 years ago | (#9872888)

Score 2, Informative? Where's my (-1, Unpaid Advertisement) mod?

?complementary? (1)

Chuck Bucket (142633) | more than 10 years ago | (#9870905)

"And how much *is* this Complementary chipset?"

CB

Intel wireless is teh sucks (2, Insightful)

leathered (780018) | more than 10 years ago | (#9870969)

Well, I only hope this new wireless performs better than Centrino. It's not like integrating WiFi into a chipset is rocket science, as all chipset makers are at it now. Oh, and this time, some Linux drivers right off the bat, please.

At the moment Centrino pairs an excellent low-power, good-performing processor (the Pentium M) with one of the poorest-performing Wi-Fi solutions you can get. But look at how they've marketed it on its poorest facet: with Centrino you can read your email on top of Everest and browse the web while skydiving, with no mention of the strengths of the Pentium M. Almost as bad as the 'Pentium 4 speeds up your internet experience' campaign. I've had people asking me about getting a new laptop because they think Centrino is the only way to get WiFi; if only they knew they could get a better-performing wireless card for the price of a few beers.

Wow! Now I can play DOOM3 at 60fps (0)

Anonymous Coward | more than 10 years ago | (#9871199)

maybe. Great news.

spoqnOge (-1, Troll)

Anonymous Coward | more than 10 years ago | (#9871202)

networking test. DOG THAT IT IS. IT You can. No, = 1400 NetBSD fucking numbers, indecision and of America (GNAA) BSD machines, li5t of other Everyday...Redefine

EM64T == AMD64 (1)

tuc (704304) | more than 10 years ago | (#9871226)

So now people like me, who don't think it makes sense to buy an x86 that can't handle 64 bits, but who (unlike me) don't have confidence in AMD, can start buying x86 chips again.

Tell me, is EM64T [intel.com] truly identical to AMD64 [amd.com] or are there small differences? I'm curious.

Re:EM64T == AMD64 (1)

corngrower (738661) | more than 10 years ago | (#9871611)

well, I think one has AMD written on the package and the other has intel written on it, so they're definitely not identical. - or were you talking electrically or in terms of timing and logic?

re: EM64T == AMD64 (1)

tuc (704304) | more than 10 years ago | (#9871919)

Laugh while you can, monkey boy, but I'm worried that EM64T [intel.com] is just enough different from AMD64 [amd.com] to give us all headaches.

Did someone [slashdot.org] just say that the DMA implementations are different enough that device drivers will not be compatible? Tell me more about that.

Re:EM64T == AMD64 (0)

Anonymous Coward | more than 10 years ago | (#9872093)

You don't trust AMD enough to buy their CPU, but you want to trust Intel to make a cloned version of *their* architecture?

re:EM64T == AMD64 (1)

tuc (704304) | more than 10 years ago | (#9872992)

You don't trust AMD enough to buy their CPU, but you want to trust Intel to make a cloned version of *their* architecture?

If you take a look at my original post, I think I made it clear that I do trust AMD. Well, at least as far as I trust the x86 architecture, which I'm not wild about.

But business users seem to be wild about the x86s and also wary of AMD, hence my worries. (And no, I don't trust Intel to make a decent clone, which is why I was asking about differences.)

Oh great (1)

foidulus (743482) | more than 10 years ago | (#9871244)

Are they going to have to tweak the Duke Nukem Forever engine to take advantage of all these features?

I don't see the connection... (1)

frank_adrian314159 (469671) | more than 10 years ago | (#9872075)

The chip can switch between different networks and frequencies; it is capable of tuning and tweaking itself.

I don't see how this has anything to do with the 90 nm process. We've had the technology to do this for quite a while. Just have the right frequency divider on the VFO for demod and you have the frequency switching. Run it over the bands sequentially and you've got autodetect. Program one or two algorithms into the firmware and you have all the tweaking you'd ever need. Is this just some other chip they happened to mention when the new 90nm Xeons came out? Because, quite honestly, I don't see why you couldn't do the same thing on a less refined process already (and probably with less cost and more stability).
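A rough sketch of the scan-and-lock loop being described here. The channel table uses real 802.11 center frequencies, but tune_vfo() and demod_locked() are hypothetical stand-ins for whatever the radio firmware would actually expose:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical radio hooks -- stand-ins, not a real chip API. */
static void tune_vfo(double freq_hz)
{
    printf("tuning to %.3f GHz\n", freq_hz / 1e9);
}

static bool demod_locked(double freq_hz)
{
    return freq_hz == 5.180e9;  /* pretend a network exists on 802.11a channel 36 */
}

int main(void)
{
    /* A few 802.11b/g and 802.11a channel center frequencies. */
    const double channels_hz[] = { 2.412e9, 2.437e9, 2.462e9,   /* b/g ch 1, 6, 11 */
                                   5.180e9, 5.200e9, 5.220e9 }; /* a   ch 36, 40, 44 */
    const int n = (int)(sizeof channels_hz / sizeof channels_hz[0]);

    for (int i = 0; i < n; i++) {         /* "run it over the bands sequentially" */
        tune_vfo(channels_hz[i]);
        if (demod_locked(channels_hz[i])) {
            printf("locked on %.3f GHz\n", channels_hz[i] / 1e9);
            return 0;                     /* "and you've got autodetect" */
        }
    }
    printf("no network found\n");
    return 1;
}
```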

CMOS (1)

Mr.Zong (704396) | more than 10 years ago | (#9873625)

Sweet. I didn't know you could use my computer's muffed-up clock to make chips. Rock on. Now if I can just figure out how to use the BIOS to make some dip, I'll be in freakin' heaven.

Legacy connections lost (1, Informative)

Anonymous Coward | more than 10 years ago | (#9874499)

No PS/2 connections, no serial, no parallel. USB or forget it.