
IEEE Seeks Data On Ethernet Bandwidth Needs

Soulskill posted more than 3 years ago | from the more-better-faster dept.

Networking

itwbennett writes "The IEEE has formed a group to assess demand for a faster form of Ethernet, taking the first step toward what could become a Terabit Ethernet standard. 'We all contacted people privately' around 2005 to gauge the need for a faster specification, said John D'Ambrosia, chairman of the new ad hoc group. 'We only got, like, seven data points.' Disagreement about speeds complicated the process of developing the current standard, called 802.3ba. Though carriers and aggregation switch vendors agreed the IEEE should pursue a 100Gbps speed, server vendors said they wouldn't need adapters that fast until years later. They wanted a 40Gbps standard, and it emerged later that there was also some demand for 40Gbps among switch makers, D'Ambrosia said. 'I don't want to get blindsided by not understanding bandwidth trends again.'"


117 comments

Build it (2)

linatux (63153) | more than 3 years ago | (#36080438)

& they will come

Re:Build it (0)

Anonymous Coward | more than 3 years ago | (#36080464)

Nah. Once you get to a certain point, extra speed becomes less and less important.

Diminishing returns and all that jazz.

Re:Build it (2)

crusty_architect (642208) | more than 3 years ago | (#36080492)

Yes, that's what we said about 10Mb/s Ethernet in the 1990s...

Re:Build it (1, Interesting)

TheRaven64 (641858) | more than 3 years ago | (#36080556)

Did they? Because I remember finding 10Mb/s networks too slow in the mid '90s. Switched 10Mb/s networks made that a bit better, but often there was still a bottleneck. On the other hand, I've only found 100Mb/s too slow on a few occasions - maybe once per year. I've used GigE, but I've never come close to saturating it.

Like the grandparent said, it's a question of diminishing returns. 1Mb/s is fast enough for pretty much any text-based data. 10Mb/s is fine for still images, unless they're really huge raw photos (and even then, progressive loading probably means that you won't notice the bottleneck). 100Mb/s is fine for video - even HD. There will eventually be things for which 1000Mb/s is too slow, but they're going to be relatively uncommon. 100Mb/s will remain fast enough for all of the things that it's fast enough for now.

And once you get to 'fast enough', other things become more important. I turned off the switch in my last house once I realised that I hadn't used the wired network for several months, and my new house is entirely wireless. 54Mb/s and freedom from wires is more useful to me than 100Mb/s.

Re:Build it (4, Informative)

Luckyo (1726890) | more than 3 years ago | (#36080586)

Much of the talk is about the operator and aggregation level, not the end user. As a result, terabit Ethernet makes sense even with the numbers you present - provided a given hub serves enough clients.

Essentially it's a case of making internal ISP networks simpler to build.

Re:Build it (3, Informative)

smash (1351) | more than 3 years ago | (#36080592)

depends what you're using it for, doesn't it?

gig-e is still slow. sure it might be fine for a single desktop port, but...

hook it up to a SAN, and before you know it you're running into the limits of a few gig-e ports bound into an etherchannel.

storage requirements are going to continue to grow. HD video / audio is going to continue to become more widespread. if you're dealing with limited numbers of cables to carry data for large (and increasing) numbers of users, there's no escaping the need for more bandwidth.

Re:Build it (1)

nomaddamon (1783058) | more than 3 years ago | (#36080830)

It might take a while to get 1Gbps+ Internet to most homes, but on the LAN I already feel GbE as a bottleneck today.
When I use DLNA to stream HD content to 3 TVs (one in the kitchen, one in the living room and 1 or 2 in the kids' rooms) and use N-spec wifi at the same time, the DLNA sometimes lags. By my calculations there should be some bandwidth left over, but not much. The lag is probably caused by unexpected overheads and GbE switches delivering "GbE in theory" rather than real GbE speeds, but with the world moving towards every single gadget/device being connected to the LAN/Internet, this will become a real problem shortly.
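
As a rough sketch of that back-of-the-envelope calculation, here is a small Python example that totals assumed per-stream loads against a nominal gigabit link. The per-stream bitrates and the usable-capacity fraction are my assumptions for illustration, not measurements from the setup described above.

    # Rough bandwidth budget for the scenario above (all figures are assumptions).
    GIGABIT_LINK_MBPS = 1000          # nominal GbE line rate
    USABLE_FRACTION = 0.7             # assume ~70% of line rate is realistically usable

    streams_mbps = {
        "HD stream to kitchen TV": 20,       # assumed DLNA HD bitrate
        "HD stream to living room TV": 20,
        "HD stream to kids' room TV": 20,
        "802.11n wifi backhaul": 150,        # assumed nominal rate; real throughput is lower
    }

    total = sum(streams_mbps.values())
    usable = GIGABIT_LINK_MBPS * USABLE_FRACTION

    print(f"Total offered load: {total} Mbps")
    print(f"Usable link capacity (assumed): {usable:.0f} Mbps")
    print(f"Headroom: {usable - total:.0f} Mbps")

On these assumptions the link itself still has headroom, which is consistent with the suspicion that overheads and switch behaviour, not raw capacity, cause the lag.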

have you tried different tcp congestion control? (0)

Anonymous Coward | more than 3 years ago | (#36081402)

I'm wondering if you've tried different TCP congestion control algorithms? There are many choices and options, some of which look appropriate; http://datatag.web.cern.ch/datatag/howto/tcp.html discusses things like increasing the transfer queue (although you might not want to do this, and might even want the opposite for particular QoS queues). This line is interesting: "If the buffers are too small, like they are when default values are used, the TCP congestion window will never fully open up"

Time flies; 10+ years ago I had a modem-sharing setup where anyone on the network could cause the modem to dial, and then everyone on the network could share it until the connection was lost. The log could be processed later to count the phone calls each person was liable for, so the local calls were almost all accounted for, with the remainder divided between housemates. A couple of years and houses later we'd just gotten ADSL2+ and wanted to share it between 4 or so people in a large house; I had the router in my room and so experimented with Linux QoS.

To this day I believe that ISPs throttle via DNS. Letting certain people torrent as hard as they could without causing UDP port 53 packets to get dropped brought a massive improvement for everyone, as did setting up a hierarchy of QoS queues and tinkering so that outbound traffic was limited to just less than the actual bandwidth, with port 53 and other realtime traffic given the highest priority. But I can't say I've seen much actual benefit from congestion control itself, so I'm wondering if you or someone else has found any?
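
For anyone who wants to check the "buffers are too small" claim from that CERN page on their own machine, here is a minimal Python sketch. It is Linux-only and simply reads the standard /proc/sys locations; if a path is missing on your kernel it prints a placeholder instead.

    # Print the current Linux TCP tuning knobs relevant to the CERN tuning guide.
    from pathlib import Path

    def read_sysctl(relative_path):
        p = Path("/proc/sys") / relative_path
        return p.read_text().strip() if p.exists() else "(not available)"

    settings = [
        "net/ipv4/tcp_congestion_control",            # active algorithm (e.g. cubic)
        "net/ipv4/tcp_available_congestion_control",  # algorithms you can switch to
        "net/ipv4/tcp_rmem",                          # min/default/max receive buffer
        "net/ipv4/tcp_wmem",                          # min/default/max send buffer
        "net/core/rmem_max",                          # hard cap on receive buffer
        "net/core/wmem_max",                          # hard cap on send buffer
    ]

    for s in settings:
        print(f"{s}: {read_sysctl(s)}")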

Re:Build it (1)

tlhIngan (30335) | more than 3 years ago | (#36084320)

It might take a while to get 1Gbps+ Internet to most homes, but on the LAN I already feel GbE as a bottleneck today.
When I use DLNA to stream HD content to 3 TVs (one in the kitchen, one in the living room and 1 or 2 in the kids' rooms) and use N-spec wifi at the same time, the DLNA sometimes lags. By my calculations there should be some bandwidth left over, but not much. The lag is probably caused by unexpected overheads and GbE switches delivering "GbE in theory" rather than real GbE speeds, but with the world moving towards every single gadget/device being connected to the LAN/Internet, this will become a real problem shortly.

Or it might just be your server.

OTA HD is 20Mbps tops (per the ATSC spec). 3 HD streams would consume a good chunk of Fast Ethernet, but there's still enough left over. And if you go Blu-ray, it also tops out around 15-20Mbps or so. If it's cable HD, you're lucky to get 6Mbps per channel.

The N wifi is probably the biggest consumer of bandwidth, but my general experience is that it offers maybe just a bit faster performance than Fast Ethernet.

Now, if your DLNA server is serving up 3 HDTV streams and you're busy copying files to it over WiFi, it's probably your server and its I/O throughput (those streams are causing the heads to skitter across the platters, and the fastest spinning rust can do is around 100 seeks/second).

Also, if your router is the one doing DLNA/file serving/packet routing (WiFi-wired), there's 95% of your problem right there.

Re:Build it (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36080610)

At least until it becomes very much cheaper, anything faster than gigabit is mostly about reducing the cable mess in high density situations.

For instance, if you are doing server virtualization, cheap multicore CPUs and cheap RAM means that it isn't at all implausible or uncommon to have numerous VMs all living in a single 2U, with the bandwidth demands of whatever it is that they are doing, plus the bandwidth demands brought about by the fact that there isn't any room for disks in there, so all their storage I/O is happening over iSCSI. You end up with every expansion slot filled with 4 port gigE cards and a real rat's nest.

Re:Build it (1)

lucifuge31337 (529072) | more than 3 years ago | (#36083204)

At least until it becomes very much cheaper, anything faster than gigabit is mostly about reducing the cable mess in high density situations. For instance, if you are doing server virtualization, cheap multicore CPUs and cheap RAM means that it isn't at all implausible or uncommon to have numerous VMs all living in a single 2U, with the bandwidth demands of whatever it is that they are doing, plus the bandwidth demands brought about by the fact that there isn't any room for disks in there, so all their storage I/O is happening over iSCSI. You end up with every expansion slot filled with 4 port gigE cards and a real rat's nest.

Try an ESXi cluster of a blade chassis of 16 servers, each running 10 or more VMs. The switch cross connects back to the core start to become a problem even at 10 GbE.

Re:Build it (2)

Zone-MR (631588) | more than 3 years ago | (#36080736)

54Mb/s and freedom from wires is more useful to me than 100Mb/s.

Except you'll not be seeing anywhere close to 54Mb/s actual throughput. You'll see around 20Mb/s, barely double the 10Mb/s Ethernet network that you deemed too slow in the mid-90's. Proves your point though that you're unlikely to need more in a home setup. Server data centres are a different story...

Re:Build it (2)

arth1 (260657) | more than 3 years ago | (#36081274)

Except that modern wireless access points and NICs do 54 Mbps on multiple channels.
Unfortunately, 802.11n is a marketing term, and can mean either 2.4 GHz, 5 GHz or both. Because consumers are cheapskates with little or no technical understanding and WAY too much faith in marketing, the trend is towards not offering 5 GHz band anymore, to save costs.

Hint: If equipment says a/b/g/n, it will support both, and you'll likely get 150 Mbps speeds (120 in reality). If lucky, you may even get 300 (230 in reality). If the equipment says b/g/n, on the other hand, don't expect much - 54 Mbps in most cases, and 150 if there's absolutely nothing else sharing the spectrum.

Re:Build it (1)

SuricouRaven (1897204) | more than 3 years ago | (#36082710)

Any more? I've only ever found one n-point that did 5GHz, and I had to shop around to get it. 5GHz is more expensive than 2.4GHz, and if consumers don't know what a gigahertz is then why should manufacturers bother to support it?

Re:Build it (1)

El_Isma (979791) | more than 3 years ago | (#36080752)

Wifi-g actually doesn't provide 54Mb/s of effective BW, more like around 27Mb/s. Just FYI.

Re:Build it (0)

Anonymous Coward | more than 3 years ago | (#36080986)

54 40' or Fight!

54 Mb/s wireless to the END USER and 40 Gb/s to the HUBS sounds good as long as the 54 Mb/s is guaranteed unthrottled, unrestricted, and not bottle-necked along the lines. Obviously faster speeds would be had through the wires, but as TheRaven 64 mentions, those speeds should be fine for most uses.

Re:Build it (0)

hairyfeet (841228) | more than 3 years ago | (#36082784)

I can already name something where 100Mb/s is too slow... cloud computing. If we are gonna be moving everything to the cloud you are gonna be talking massive amounts of data flying back and forth, and just backups can get pretty damned huge and take a while at 100Mb/s.

Of course the big "gotcha" nobody is mentioning is that the ISPs are gonna cap the shit out of everybody so bad it won't matter what speed you have, you'll never be able to use it. If we don't get some sort of net neutrality and regulations so they can't charge 1000% profit on raw data, you're gonna end up with sub-30GB caps and $2.50 a GB overage charges.

Re:Build it (1)

lucifuge31337 (529072) | more than 3 years ago | (#36083170)

On the other hand, I've only found 100Mb/s too slow on a few occasions - maybe once per year. I've used GigE, but I've never come close to saturating it.

This isn't about or for home users, or even small office users. It's about network operators.

In my small operation (under 100 servers, three 1Gb internet connections) I have several places where I completely saturate 1Gb and have, for cost reasons, trunked it (10 GbE is still very expensive when you look at having to replace/upgrade core switching to support it). Switch cross connects and SANs are the biggest offenders. Trunking sucks (anything that requires more complexity and configuration is always worse than the simpler solution in my opinion), and getting higher speed ethernet connections into the top of the market will reduce the cost of 10 GbE as well as giving us (datacenter guys) somewhere else to scale when we need it.

Re:Build it (1)

TheRaven64 (641858) | more than 3 years ago | (#36083510)

This isn't about or for home users, or even small office users. It's about network operators.

Which is exactly the point. That's what diminishing returns means here. 10Mb/s is too slow for 90% of users. 100Mb/s is too slow for 10% of users. 1Gb/s is too slow for 1% of users. 10Gb/s is too slow for 0.001% of users. Each speed bump increases the number of people for whom it's fast enough. If you're designing a new 100Gb/s interconnect, it's going to be for such a small group of people (compared to the total set of computer users) that defining something backwards compatible with 100Mb/s Ethernet may not be worthwhile.

Re:Build it (1)

lucifuge31337 (529072) | more than 3 years ago | (#36083652)

it's going to be for such a small group of people (compared to the total set of computer users) that defining something backwards compatible with 100Mb/s Ethernet may not be worthwhile.

I'm not sure what you think I was responding to or talking about. You're bringing up a point that I completely agree with, but also one that I wasn't discussing in my post.

Re:Build it (1)

TheRaven64 (641858) | more than 3 years ago | (#36083940)

The point is the same one that I made in my original post, which you responded to...

Re:Build it (1)

X0563511 (793323) | more than 3 years ago | (#36085462)

... and his point is that you're looking at the bottom.

You need to look further up, where 1000s of those users are trying to cram data through your links. It adds up.

Re:Build it (1)

skids (119237) | more than 3 years ago | (#36083828)

Well, I think the ATM crowd should be allowed to say "I told you so": "trunking" under ATM is fantastically simple, since there are no reordering problems and thus no need for hashing and balancing.

ATM crowd, please step to the stage for due credit... ...crickets...

Oh right, everyone went for the technology they understood instead of the better one. Par for the course.

Re:Build it (1)

lucifuge31337 (529072) | more than 3 years ago | (#36084002)

ATM crowd, please step to the stage for due credit... ...crickets...

Oh right, everyone went for the technology they understood instead of the better one. Par for the course.

There is no possible rebuttal to this.

Hah, we pushed for gigabit in the 90s (0)

Anonymous Coward | more than 3 years ago | (#36084282)

Thanks to us, you had gigabit as an option in the last decade.

In the mid 90s, we had full duplex 100BaseT and had to box the ears of IT people who believed it was enough. We got 1GbaseT to the desktops before the end of the 90s, when PCs could already saturate 100baseT with an SSH connection, and now we've stagnated again because IT cannot see past the end of their ancient NFS servers. (Why would anybody need something faster or more reliable than our herky-jerky NFS and NIS servers?... sigh.)

I want 10GbaseT to the desktop, so we can get back to saturating disks over one link again. Right now, a laptop SSD or a trivially small RAID workstation has double or triple the bandwidth of a 1G link, so the link becomes the bottleneck during backups, replication, or development VM imaging.

Re:Build it (1)

Locke2005 (849178) | more than 3 years ago | (#36080584)

You obviously haven't downloaded enough porn lately... but yes, past a certain point, the servers become the bottleneck, not the network. If only we could get people to use IP multicast for live streaming...
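
Multicast is simple enough to experiment with on a LAN; here is a minimal Python sketch of a UDP multicast sender. The group address and port are arbitrary picks for the example, not anything standardized for streaming.

    # Minimal UDP multicast sender: one copy of each packet leaves the host,
    # and every receiver subscribed to the group on the LAN gets it.
    import socket
    import time

    GROUP = "239.1.1.1"   # arbitrary administratively-scoped multicast group
    PORT = 5007           # arbitrary port for this example

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the traffic on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    for i in range(10):
        sock.sendto(f"frame {i}".encode(), (GROUP, PORT))
        time.sleep(0.1)

A receiver binds to the same port and joins the group with an IP_ADD_MEMBERSHIP setsockopt; however many receivers join, the sender still transmits each datagram only once.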

I don't think you guys were listening (1)

crusty_architect (642208) | more than 3 years ago | (#36080442)

We all wanted 100G. 40G is a waste of time.

Re:I don't think you guys were listening (1)

petermgreen (876956) | more than 3 years ago | (#36080496)

AIUI the real issue is that 40 and 100 gigabit ethernet is just a low level (and as I understand it more efficient than packet level link aggregation techniques) system for aggregating 10 gigabit links. If you want 40 gigabit you need 4 fiber pairs (or 4 wavelengths in a WDM system), if you want 100 gigabit you need 10 fiber pairs (or 10 wavelengths in a WDM system).

40G/100G is the first time in the history of ethernet that the top speed hasn't been able to be run through a single fiber transceiver. Do you really want to be using up 10 fiber pairs when 4 would be sufficient?

Re:I don't think you guys were listening (2)

gbjbaanb (229885) | more than 3 years ago | (#36080564)

Do you really want to be using up 10 fiber pairs when 4 would be sufficient?

I would when 4 is no longer sufficient.

The cost of the cable is minor compared to the cost of laying it, so I can't help thinking 100Gb makes more sense overall.

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36080596)

What do people mean when they say that stuff about the cost of laying it?

I know nothing of physics or any sort of engineering, so I ask: is it difficult to plan for expandability in a way that doesn't require any effort other than just the running of cable itself?

Re:I don't think you guys were listening (1)

cbope (130292) | more than 3 years ago | (#36080740)

"Laying" in this context typically means buried cable, in other words medium-to-long or longer-distance runs. Even cable that costs $10's per foot, costs much more than that per foot when you factor in the heavy equipment needed to dig the trench and the manpower to physically lay the cable.

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36080798)

Because once cable is laid you can't run new cable through the same pipe in the ground.... (replacing the older cable if needed)

Re:I don't think you guys were listening (1)

machine321 (458769) | more than 3 years ago | (#36081320)

Typical Slashdotters, they know nothing when it comes to getting laid.

Re:I don't think you guys were listening (1)

X0563511 (793323) | more than 3 years ago | (#36085490)

Cable pulls might work well in a building through conduits, but it gets kind of difficult to pull a cable several miles...

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36082778)

Thanks for explaining.

Re:I don't think you guys were listening (3, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36080838)

Running the cable is the part that requires the effort, unfortunately. There are things that help (e.g. if underground, lay a larger-diameter conduit than you think you'll need, because you will end up needing it; leave a fish line so that you can pull the next bundle through, etc.); but for anything longer than an in-room patch, the cost of getting more cable run can go up quickly. In buildings, you need to run the stuff so as not to damage any fire barriers (and ideally avoid having to tear up any walls...). Underground, there are the joys of trenching or pulling recalcitrant cables through existing pipes. If running on utility poles, the proximity to high voltage means you'll probably need linesmen, even though fiber is electrically harmless.

When you can, you 'plan for expandability' by pulling as many strands of fiber in a single bundle as they'll let you get away with. The cost of each strand is comparatively small. The cost of pulling a bundle, whether it be two strands or 128 strands, is comparatively huge. You then just leave the ones you don't immediately need dark until you do need them.

For very nasty runs (undersea cables, backbones of large landmasses, etc.) I'm told that there is some emphasis on designing new transmitter/receiver systems that can squeeze more bandwidth out of the strands you already have (when the alternative is laying another fiber bundle across the Pacific Ocean, almost arbitrarily expensive endpoint hardware starts to look like a solid plan...). Such matters are well beyond my personal experience, though.

Re:I don't think you guys were listening (1)

MichaelSmith (789609) | more than 3 years ago | (#36081012)

I'm told that there is some emphasis on designing new transmitter/receiver systems that can squeeze more bandwidth out of the strands you already have

Yeah this was in the news today [zdnet.com.au] . It talks about 100Gbps per wavelength and 16Tbps in total.

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36082574)

Thanks for explaining, I really thought that larger conduits were an easy solution, but apparently they're not-that-easy. Thanks again.

Re:I don't think you guys were listening (1)

SuricouRaven (1897204) | more than 3 years ago | (#36082810)

"There are things that help"

Ferrets.

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36080700)

Cable laying costs are only significant when going between buildings. Within a datacenter, which is where most of the 40Gbps and 100Gbps kit will be used, fibre costs are tiny.

One of the big problems with the 40Gbps and 100Gbps designs isn't the cost of the fibre but the cost of the transceivers. That's going to be a big limiting factor for many businesses.

Re:I don't think you guys were listening (1)

Eivind (15695) | more than 3 years ago | (#36081594)

True enough, but there's a lot of cable already installed, and the cost of requiring new cable, as opposed to being able to use the currently installed one, is VERY high indeed. The replacement cost goes up even more if the new cable is thicker than the one it is replacing, since that can mean needing new buried pipes when the new cables won't fit through the old ones.

And I don't see a compelling reason. A single current-day single-mode optical fiber is capable of transmitting 15 Tbit/s over 100 miles (more over shorter distances) if using state-of-the-art transceivers, and experimental new fibers are up to 70Tbit/s over 250km.

Okay, those transceivers are too expensive for home use at the moment, but I don't need more than 1Gbps at home at the moment either. If I do in a decade or two, who says 10Tbit/s over a single optical fiber won't be cheap then? (Given that it's practical even TODAY.) 20 years is a long time in network technology. (What did a 100Mbps link over a single km cost 20 years ago?)

Home use? (0)

Anonymous Coward | more than 3 years ago | (#36083344)

This isn't about home use though; consider an office, a render farm or other heavily-multiuser environments. There are already real world situations where the network bandwidth is the bottleneck; it prevents a render farm adding more nodes, or forces a larger office to split their networks up (adding to complexity, hence reducing reliability and security).

Those are the needs that need addressing; if the consumer service providers aren't able to justify the expense of upgrading their infrastructure, they won't. That doesn't mean the capability shouldn't exist for those who *do* need it.

Re:I don't think you guys were listening (1)

Bengie (1121981) | more than 3 years ago | (#36081732)

Lay 10 fibers in preparation for 100gb, then team two 40gb using 8 of those 10. As 22nm yields go up and the tech leaks into networking, prices will drop dramatically. Heck, Intel claims cheap consumer-grade 10gb NICs will be made with 22nm and we should see integrated 10gb cropping up in 2012.

In ~3 years, we should see 10gb NICs where 1gb used to be.

Re:I don't think you guys were listening (1)

cdpage (1172729) | more than 3 years ago | (#36083804)

Agreed,

Rather than 10, make it 12 or even 16; 10 makes it future-proof, and 12-16 gives businesses an opportunity for other channels. The cost will go down as they implement anyway.

Re:I don't think you guys were listening (1)

petermgreen (876956) | more than 3 years ago | (#36083840)

If you are laying new fiber from scratch I would agree laying plenty of spare is a good idea, given that the amount we can cram down one fiber seems to be plateauing somewhat (it hasn't completely stopped increasing, but I'm pretty sure that 40/100 gigabit is the first time a new speed of Ethernet has been unable to run down a single fiber at release).

OTOH a lot of places will be using fiber laid years ago. Back in the days when gigabit (which can easily run on one fiber pair) was the new hotness, even four pairs probably seemed like plenty of spare. How many cabinets actually have 10 fiber pairs running to them?

Also, even if you do have the fiber, do you really want to pay for twice as many optical transceivers as you need?

100Gb makes more sense overall.

100Gb makes sense if you believe that your requirements for a link will pass 40 gigabits per second BEFORE a technology comes out at a (relatively) reasonable cost that can do more than 10 gigabits/second/fiber.

Re:I don't think you guys were listening (0)

Anonymous Coward | more than 3 years ago | (#36081054)

AIUI the real issue is that 40 and 100 gigabit ethernet is just a low level (and as I understand it more efficient than packet level link aggregation techniques) system for aggregating 10 gigabit links. If you want 40 gigabit you need 4 fiber pairs (or 4 wavelengths in a WDM system), if you want 100 gigabit you need 10 fiber pairs (or 10 wavelengths in a WDM system).

40G/100G is the first time in the history of ethernet that the top speed hasn't been able to be run through a single fiber transceiver. Do you really want to be using up 10 fiber pairs when 4 would be sufficient?

This is not accurate; 100GE can be transported in a single 50 GHz spacing, the same as traditionally used by 10GE/OC-192c, depending on the modulation used by the optical equipment.

Personally... (0)

Anonymous Coward | more than 3 years ago | (#36080446)

Personally, ever since we moved beyond 10Mb I've been satisfied with LAN speeds. That's my $0.02 as an end user.

Bandwidth trends? (1)

Wolfling1 (1808594) | more than 3 years ago | (#36080474)

Ahh-hahahahahaha.... Moore's law, guys. And before people flame me for misinterpreting the law, common usage is 'double the speed every 18 months'. It might be a misinterpretation, but it's the most common usage in the world today.

When was the last time someone significantly increased hardwired bandwidth?

I gotta stop drinking red wine, and then posting on /.

Re:Bandwidth trends? (1)

somersault (912633) | more than 3 years ago | (#36080532)

It might be a misinterpretation, but it's the most common usage in the world today.

Yeah, because being commonly believed makes something true *facepalm*

When was the last time someone significantly increased hardwired bandwidth?

I guess Firewire, USB, HDMI, DisplayPort, Thunderbolt, etc. If you're talking switches then I think there are 10Gbps ones available but they aren't necessary for most home users and businesses yet - anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway, and for most people right now, 1Gbps should be acceptable for backups and file transfers.

I don't give a crap about increasing local ethernet speeds right now - the majority of people would get far more benefit from better internet connections. Most people can't even get 100Mbps internet, let alone 100Gbps.

Re:Bandwidth trends? (1)

Anonymous Coward | more than 3 years ago | (#36081134)

anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway,

Not true for long. Infiniband EDR 12x is 300Gbit/sec. It's only a matter of time before that speed hits the desktop. The fastest single internal device you can buy [fusionio.com] currently goes 6Gbit/sec. You'd need a cluster linked via Infiniband to reach 300Gbit, probably around 9 nodes with 6 cards per node. It's definitely attainable.

Re:Bandwidth trends? (1)

somersault (912633) | more than 3 years ago | (#36081316)

That fusion IO thing is actually 6GByte/s, which is 48Gbit/s (unless they made a mistake with capitals on that page), but it's not exactly small business/consumer grade stuff! If you set up a RAID array then you're obviously going to be able to handle higher bandwidths, but such a setup is really superfluous and overcomplicated for the majority of PC users.

Re:Bandwidth trends? (0)

Anonymous Coward | more than 3 years ago | (#36082326)

^ troll

Re:Bandwidth trends? (1)

bunratty (545641) | more than 3 years ago | (#36081424)

It isn't Moore's law, but speed of networking does follow an exponential trend, as does capacity of hard disks. Maybe if you make a logarithmic graph of when 10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps, and 100 Gbps Ethernet appeared you could estimate when 1 Tbps Ethernet should appear.
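
As a sketch of that estimate, here is a small Python example that fits a straight line to log10(speed) versus ratification year and extrapolates to 1 Tbps. The years are approximate IEEE ratification dates from memory, so treat the output as illustrative rather than a prediction.

    # Fit a straight line to log10(speed) vs. year and extrapolate to 1 Tbps.
    import math

    milestones = [
        (1990, 10e6),     # 10BASE-T (approx.)
        (1995, 100e6),    # 100BASE-TX (approx.)
        (1999, 1e9),      # 1000BASE-T (approx.)
        (2002, 10e9),     # 10GbE over fiber (approx.)
        (2010, 100e9),    # 40/100GbE (approx.)
    ]

    xs = [year for year, _ in milestones]
    ys = [math.log10(speed) for _, speed in milestones]

    # Ordinary least-squares fit of y = a*x + b.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / var_x
    b = mean_y - a * mean_x

    terabit_year = (math.log10(1e12) - b) / a
    print(f"Doubling time: {math.log10(2) / a:.1f} years")
    print(f"Extrapolated 1 Tbps Ethernet: ~{terabit_year:.0f}")

The naive fit lands very early because the jump from 10GbE to 100GbE took eight years rather than the three or four the earlier steps took, which is itself a hint that the trend is flattening.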

Forget "today" (1)

wjlafrance (1974820) | more than 3 years ago | (#36080530)

Who cares about our "needs"? Build the best cable you economically can today and it'll meet your bandwidth needs tomorrow. "SATA can only push 6 Gigabits, and that's from the drive to the motherboard!" is no excuse to not pass 6 Gigabits on your Ethernet cables. If you can push 600 Gigabits, that means the cable will serve you quite nicely for many, many years. I used to think that when I built / renovated my first house, I'd run Ethernet through the walls. Now, why bother? If I'd done this three years ago or so, I'd have likely used CAT5 and been pushing 100 Megabits reliably. By unplugging the Ethernet cable, I'd get 450% of that speed using the 802.11n MIMO tri-antennas in my MacBook Pro. The entire investment would be worthless because people only built the cable that was needed at the time.

Re:Forget "today" (0)

Anonymous Coward | more than 3 years ago | (#36080642)

Or you could just plug gigabit equipment in and be transferring data far more quickly than any 802.11 standard can.

The additional wireless overheads plus encryption overheads mean it'll still be a long time before wireless can reliably surpass gigabit speeds.

Re:Forget "today" (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36080860)

You can get gigabit+ wireless today; but only from highly directional, fixed-location gear, usually marketed as a cheaper alternative to a redundant fiber path between buildings. A perfectly fine solution to the risk of backhoe-induced packet loss; but not exactly laptop ready.

The assorted 802.11 standards are substantially slower even in theory, and their quoted bandwidth numbers are usually absurdly inflated.

Re:Forget "today" (1)

smash (1351) | more than 3 years ago | (#36080696)

By unplugging the Ethernet cable, I'd get 450% of that speed using the 802.11n MIMO tri-antennas in my MacBook Pro.

no you won't.

not unless you have an airport in your lap as well. And it will be the 450 megabit shared between every device, rather than switched 100 meg per port.

Besides, if you were in any way cluey, you would have used cat5e, and be pushing gigabit.

Re:Forget "today" (1)

peppepz (1311345) | more than 3 years ago | (#36081102)

Who cares about our "needs"?

I believe that developing a "next-generation" standard costs time and money. They probably want to avoid investing millions to develop a technology that people won't buy quickly (perhaps due to the high price that the products would have at the beginning).

Re:Forget "today" (1)

beanpoppa (1305757) | more than 3 years ago | (#36083320)

Cat 5e has been pretty much the standard for at least 10 years now. If you had run that 3 years ago, you'd be easily pushing 1Gbps to each device on your network, rather than sharing, under ideal conditions, a maximum of 350Mbps between all your devices.

iPad (0)

Anonymous Coward | more than 3 years ago | (#36080546)

The iPad only supports wifi, so who cares about anything faster?

Re:iPad (1)

Chas (5144) | more than 3 years ago | (#36080650)

Basically anyone using a real computer with a real operating system. Toys and their vendors need not apply.

They should have asked meatloaf (0)

Chrisq (894406) | more than 3 years ago | (#36080574)

They should have asked meatloaf

You and me we're goin' nowhere slowly
And we've gotta get away from the past
There's nothin' wrong with goin' nowhere, baby
But we should be goin' nowhere fast

'We only got, like, seven data points.' (0)

Anonymous Coward | more than 3 years ago | (#36080588)

'We only got, like, seven data points.' - That's, like, wrong, Dude.

640 k... (2)

spectrokid (660550) | more than 3 years ago | (#36080726)

Sure this will be used in datacenters and in between them. But for the humble desktop, haven't we passed the "good enough" mark at properly switched, full-duplex 100 Mbit? Does anybody here need more than 100M on his office desk?

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36080764)

So I take it you've never had to back up a RAID array over a network.

Yeah, I'm just a basic humble desktop user, but saturating a Gbit link can be done quite easily with a couple of SATA disks.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36080846)

yes. yes indeed.
In fact, I have (somewhat) less need for GzBps (GaZillion Bytes per second, you read it here first!) at work than at home (just built a ~10TB raid 6 array at home and need to copy all my "stuff" around, so that I can get more stuff, so I can build a bigger array, so that...., well, you get the idea).

Re:640 k... (1)

Chatterton (228704) | more than 3 years ago | (#36081658)

I have a personal server with a 3TB RAID 5 array at home, and when it's backup time, my Gbps Ethernet card is white hot. My scenario is not about copying stuff to get more stuff, just backing up my stuff. I just do photography, and one photo can take up to 100 MB (1 x RAW (5616 x 3744 x 14 bits), 1 x color/distortion corrected (5616 x 3744 x 16 bits), 1+ x edited (5616 x 3744 x 16 bits), 2+ x JPEGs at different resolutions). It is mostly OK because I launch the backups just before going to bed, but while they run my network is completely on its knees and unusable for anything else. That is to say, you don't need to do anything crazy to max out a Gbps Ethernet network in a non-work environment.
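
To put some rough numbers on that, here is a small Python sketch of how long one night's backup takes at different link speeds. The photo size follows the comment above, but the number of photos per run and the link-efficiency figures are my assumptions for illustration.

    # How long does a night's photo backup take at different link speeds?
    photo_mb = 100          # ~100 MB per photo, as described above
    photos = 300            # assumed size of one backup run

    total_mb = photo_mb * photos

    for label, link_mbps, efficiency in [
        ("100 Mbps Ethernet", 100, 0.70),
        ("Gigabit Ethernet", 1000, 0.40),   # ~45-50 MB/s is a typical CIFS figure
    ]:
        throughput_mb_s = link_mbps * efficiency / 8   # megabits -> megabytes
        minutes = total_mb / throughput_mb_s / 60
        print(f"{label}: ~{minutes:.0f} minutes for {total_mb / 1000:.0f} GB")

On these assumptions a 30 GB run ties up gigabit for around ten minutes, and Fast Ethernet for the better part of an hour, which matches the "network on its knees" experience.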

Re:640 k... (1)

ledow (319597) | more than 3 years ago | (#36080796)

So your desktops are all 100Mbps (which, you're right, is more than adequate for general use).

So the switch they plug into has to have a 1Gb backbone (usually one per 12-16 clients for office-type stuff, or else you hit bottlenecks when everyone is online - but for everyone to have "true" 100Mb, you need a 1Gb line per 8-or-so clients).

Those 1Gb backbones (usually multiple) then have to daisy-chain throughout your site (and thus if your total combined usage is over 1Gb in any one direction, you're stuffed) OR you can give them multiple 1Gb lines or (in the future) a 10Gb line as backbone.
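
A quick Python sketch of the oversubscription arithmetic behind those ratios (the client counts are illustrative; whether every client actually transmits at once is the usual unknown):

    # Oversubscription on a 1 Gbps uplink feeding 100 Mbps desktop ports.
    uplink_mbps = 1000
    port_mbps = 100

    for clients in (8, 12, 16, 24):
        oversub = clients * port_mbps / uplink_mbps
        fair_share = uplink_mbps / clients
        print(f"{clients:2d} clients: {oversub:.1f}:1 oversubscription, "
              f"{fair_share:.0f} Mbps each if all transmit at once")

At 8 clients the uplink is not oversubscribed at all ("true" 100Mb per desk); at 12-16 clients each desk can still count on 60-80Mb under full load, which is why that ratio works for office traffic.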

A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW.

The school I work in has a central database run from two bonded 1Gb connections on the server. I think the school would pay even today's prices for 10Gb if it was standardised and ubiquitous enough to be on servers and switches by default. That's a school with 150 machines, most of which can't even go on the database anyway (about 25 do) - hardly a huge demand or unusual circumstances.

We'd certainly pay for 10Gb ports on the servers and a single 10Gb port on a switch that we then load up with 10/100 and Gigabit connections to ensure the best throughput for those ports. That's for a quite-small database - we don't do video-streaming or anything heavy over that connection.

The Internet connection? Yeah, we only have 2 x 24Mbps anyway, so it's no use for that. But internal connections often shift a hell of a lot more data (how much data do you think is transferred if you do PXE booting, or centralised file/application storage on servers, etc.?). 10Gb should be available today - it's taking too long. 100Gb is around the corner and could be in every school in 10 years or so, especially with the push towards video-streaming/caching, etc.

It's hardly rocket-science type uses that apply here. An ordinary school could make good use of 10Gb today and even is beginning to have a *need* for it.

Re:640 k... (2)

jimicus (737525) | more than 3 years ago | (#36080928)

If you're running two bonded 1Gb connections from a database server to serve 25 users in a school and it's not fast enough, I can only think of two possible explanations:

1. It's a university rather than a school, and it's a big dataset being used for reasonably high-tech research.
2. Your problem is not the network.

Re:640 k... (1)

beanpoppa (1305757) | more than 3 years ago | (#36083394)

3. You're doing it wrong. I've seen poorly written database applications that have the client pull entire tables down to process them locally, rather than write proper queries.

Re:640 k... (1)

jimicus (737525) | more than 3 years ago | (#36083532)

That's more-or-less what I meant by "your problem is not the network" ;)

Re:640 k... (2)

BandoMcHando (85123) | more than 3 years ago | (#36081078)

The first bit sounds more like a design issue than a problem with network speed. If you're really saturating your uplinks in this way and heavily utilising the network infrastructure, I suspect you might want something a bit more robust than the setup you have described.

"A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW."

To be honest, the price difference between a 24x10/100 + 2x10Gb and a 24x10/100/1000 + 2x10Gb would probably be so insignificant that people just wouldn't bother with either making it or buying it. The improvements in the step up from 10/100Mb to 1Gb are far more than just speed - proper standardised negotiation for a start, which is notoriously piss-poor on 10/100Mb. And those products already exist - a bit expensive, probably $1.5-2.5k or so.

"10Gb should be available today"

Er... it is? Heck, 40Gb is available today. Expensive admittedly, but most definitely available.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36081304)

The first bit sounds more like a design issue than a problem with network speed. If you're really saturating your uplinks in this way and heavily utilising the network infrastructure, I suspect you might want something a bit more robust than the setup you have described.

"A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW."

To be honest, the price difference between a 24x10/100 + 2x10Gb and a 24x10/100/1000 + 2x10Gb would probably be so insignificant that people just wouldn't bother with either making it or buying it. The improvements in the step up from 10/100Mb to 1Gb are far more than just speed - proper standardised negotiation for a start, which is notoriously piss-poor on 10/100Mb. And those products already exist - a bit expensive, probably $1.5-2.5k or so.

"10Gb should be available today"

Er... it is? Heck, 40Gb is available today. Expensive admittedly, but most definitely available.

40g? Cisco just commercially released their 100g linecards for the CRS-3.

The computer party "The Gathering" in Norway used these to achieve a 100g internet connection for ~5000 people.

Re:640 k... (1)

subreality (157447) | more than 3 years ago | (#36081126)

A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW.

The future is here! 10GBASE-T was standardized over 5 years ago, and fiber variants before that. Every major manufacturer's midrange fixed-config edge switch lineup has a 24/48 port 10/100/1000 switch with dual 10Gb uplinks.

Just a few examples:

http://www.cisco.com/en/US/products/ps6406/index.html [cisco.com]
http://www.extremenetworks.com/products/summit-x350.aspx [extremenetworks.com]
http://www.brocade.com/products/all/switches/product-details/fastiron-gs-series/index.page [brocade.com]
http://h30094.www3.hp.com/product.asp?sku=3981100&mfg_part=J9146A&pagemode=ca [hp.com]

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36081136)

So your desktops are all 100Mbps (which, you're right, is more than adequate for general use).

So the switch they plug into has to have a 1Gb backbone (usually one per 12-16 clients for office-type stuff, or else you hit bottlenecks when everyone is online - but for everyone to have "true" 100Mb, you need a 1Gb line per 8-or-so clients).

Those 1Gb backbones (usually multiple) then have to daisy-chain throughout your site (and thus if your total combined usage is over 1Gb in any one direction, you're stuffed) OR you can give them multiple 1Gb lines or (in the future) a 10Gb line as backbone.

A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW.

2001 called and wants their post back. There are literally hundreds of 10/100/1000 24-port switches with 2-port 10GE uplinks; the standards needed for these technologies were ratified years ago, and gluing them together does not constitute the need for a new standard. Here's an example [cisco.com] of your switch, one of the lowest models Cisco sells for campus / LAN applications.

Re:640 k... (1)

sirlark (1676276) | more than 3 years ago | (#36080806)

Yes, on my office desk I do. I work with large (TB+) data sets, which we need to make backups of, and generally multiple working copies on various colleagues computers. Working with the data directly over a 100Mbit network is impractical; in fact, having a single copy we all work on isn't a good idea either, because sometimes we modify the data, thereby clobbering it for others.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36080908)

Sure this will be used in datacenters and in between them. But for the humble desktop, haven't we passed the "good enough" mark at properly switched, full-duplex 100 Mbit? Does anybody here need more than 100M on his office desk?

I see hiccups in my virtual machine desktop which is on a remote vmware installation some 10 miles away in a data centre.

Re:640 k... (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36080934)

Yes and no. At home, the difference between 100Mb and GigE only comes up on the rare occasion that I need to do a full backup of an entire machine. Most everything else is either local, or media streaming (and even Blu-ray only supposes a maximum read rate of 54Mb/s, so uncompressed rips should work just fine over Ethernet).

At the office, where basically everything but the OS is done from network storage, for backup and easy-availability-from-any-PC purposes, 100Mb is OK; but for working on larger files you can definitely tell "Ok, that file is on my local drive" or "ah, that would be a network location..." It is hardly unusable; but 100Mb is a bit too slow to make local and remote storage indistinguishable in practice (obviously, latency will forever be worse with distance; but with higher link speed the real-world experience of interacting with a file on a fancy SAN should be better than the experience of interacting with the same file on a lowest-bidder local disk...)

Re:640 k... (1)

KingMotley (944240) | more than 3 years ago | (#36084086)

Some of us have internet connections that are faster than 100mb.

Re:640 k... (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36084308)

My doctor tells me that thinking about people like you is bad for my hypertension, so I try not to...

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36085306)

100Mb is a bit too slow to make local and remote storage indistinguishable in practice (obviously, latency will forever be worse with distance; but with higher link speed the real-world experience of interacting with a file on a fancy SAN should be better than the experience of interacting with the same file on a lowest-bidder local disk...)

On a lowest-bidder disk being the key. When considering apples-to-apples storage, it's a different story, and 1Tb, 10Tb, or even 100Tb can't fix the problem. Electrons travel at sub-light speeds, connections are not zero length, packetizing signaling/payload adds delay, and repeating devices add further delay. As such, until we find a means of successfully utilizing quantum entanglement for data transmission, remote storage will always be slower than local storage.

Re:640 k... (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#36085356)

Oh, certainly, I'm not expecting a remote disk of given performance to perform equivalently to a local disk of the same performance, particularly for latency sensitive applications. However, at least for basic desktop stuff, the most noticeable lags are in the "poke 600MB file, twiddle fingers" case, rather than the "notice just a tiny bit of latency in everything you do" case. And, at least in our setup, each of the desktops has whatever unspecified-brand disk Dell felt was cheapest that day, while the network storage locations are all located on multiple shelves of 15kRPM SAS drives with substantial backing RAM. Were the desktops equipped with decent SSDs, or PCIe-attached ones, it'd be no contest.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36080946)

www.enjoy-sunshine.org

Re:640 k... (1)

drinkypoo (153816) | more than 3 years ago | (#36081166)

I have GigE at home and I use it. 100M can't keep up with even a crappy hard disk.

Re:640 k... (1)

lbates_35476 (901961) | more than 3 years ago | (#36081346)

If you had said 1Gb I might have agreed, but only for now. Moving digital pictures, digital video, or any other rich content around is taxing even Gb Ethernet. The number one requirement that I see clients having is a connection that is fast enough to keep timely backups of their system on a network device. For now, 100Mb just doesn't cut it. Gb Ethernet is adequate, but only for now, as the amount of data that users keep on their desktops and laptops explodes.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36081400)

anybody here needs more than 100M on his office desk?

Yes, I do. If you worked with large files (eg video), you would understand the need to improve speeds.

Actually, we should never be happy with "good enough". It's always possible to improve.

Re:640 k... (0)

Anonymous Coward | more than 3 years ago | (#36085532)

True. But unless your company or department is dedicated to video, users who move that kind of data on a regular basis are far and away the exception.

Re:640 k... (1)

WuphonsReach (684551) | more than 3 years ago | (#36084594)

In 2011, if you're still feeding 100Mbps to the desk for brand new installs, you're being incredibly cheap. 1Gbps ports are no longer that expensive. It's a difference of something like $10 vs $17 per port between 100Mbps and 1Gbps, and getting a decent 100Mbps switch is becoming more difficult. Hell, that statement was true going back as far as 2008 or 2009, when the lower end 24-port gigabit switches first dropped below $500. Not hard now to get a "smart" 48-port gigabit switch for about $800 ($17/port). It won't have all the management features of the high-end switches, but it also won't be a slouch.

As soon as you start moving gigabyte sized files around, you've just passed the point where 100Mbps still makes sense. Assuming you'll hit 70% of capacity, 100Mbps gives you 7 MB/s which means 146 seconds to transfer a GB worth of data. Switch to gigabit and even if you only manage 30MB/s, you've cut that 146 seconds down to 34 seconds.

(I generally see speeds in the 45-50 MB/s range on standard-sized-frame 1Gbps ethernet talking to SAMBA/CIFS. Which is a hell of a lot better than topping out at only 7-8 MB/s for 100Mbps ethernet. Jumbo frames would help a bit.)

In comparison, USB2 typically tops out at 20-30 MB/s, based on real world usage. If all you have is 100Mbps, then it's faster to copy the files to a USB2 device, walk them across the room, and plug them into the other system than to feed them over the network.

As soon as your users start touching files larger than 20MB, they're going to notice.
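
The arithmetic above, as a small Python sketch you can tweak for other file sizes and link speeds (the effective-throughput figures are the same rough ones quoted in this comment, not benchmarks):

    # Time to move a file of a given size at various effective throughputs.
    file_gb = 1.0

    links = [
        ("100 Mbps at ~70% efficiency", 7),    # MB/s
        ("1 Gbps, conservative", 30),
        ("1 Gbps, typical CIFS", 45),
        ("USB2 sneakernet copy", 25),
    ]

    for label, mb_per_s in links:
        seconds = file_gb * 1024 / mb_per_s
        print(f"{label}: {seconds:.0f} s per {file_gb:.0f} GB")

Running it reproduces the ~146 s versus ~34 s comparison above, and makes it easy to see where USB2 sneakernet stops being competitive.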

Yes, it's needed (1)

bertok (226922) | more than 3 years ago | (#36080904)

It may not be needed this instant, but there's no such thing as too much bandwidth. Just off the top of my head, I can think of a whole bunch of reasons one would want terabit Ethernet:

- For High Performance Computing and Database Replication -- both of these can result in systems that have performance that is almost entirely limited by the network, or very careful (expensive) programming is required to work around the network. Think about Google's replication bandwidth requirements between data centers! Cloud computing providers will have similar problems.
- Latency sensitive computing -- n-Tier applications like SAP have CPUs waiting for the network an awful lot. Users have to put up with multi-second response times because of the chattiness of the RPC protocols between the layers. Faster networks have lower latency, and when microseconds count, there's no such thing as too much bandwidth, even if the bandwidth isn't utilized.
- Converged Networking -- lots of people are merging their Ethernet and Storage (FC) networks, using iSCSI or FCoE. Fibre is already at 8Gbps, and SSD disks are going to create a situation where the disks have many times the speed of the interconnect. Note that bandwidth goes up as the IO response times drop, and we're about to see a drop from ~3ms for 15K RPM disks to under 1 microsecond for next-gen enterprise SSDs! SAN vendors are going to want 100Gbps ports soon, which implies 1Tbps aggregation ports.
- Bladesystems -- even today, a chassis can take 12-16 blades, each of which has 20 cores at around 3 GHz. That's the equivalent of 1THz of aggregate compute power! The uplinks can become bottlenecks, especially when they are used for both storage and data.
- Movie and TV Studios -- there are digital movie cameras just around the corner that can capture 260Mpixel images at 24fps. That's something like 300Gbps if transmitted uncompressed! Throw in stereo, multiple angles, and then 1Tbps starts to sound like a good idea.
- On-Demand TV -- the aggregate bandwidth requirements of millions of households watching 4 hours of TV a day is just insane. Even with clever replication and multicast technologies, serious bandwidth is required to enable everyone to watch whatever they want, whenever they want.

Remember that networking is more or less fungible -- interconnects are all about moving bits about. At least in principle, almost any data cable could be replaced by Ethernet, or any similar technology. This 'unification' of networking is an ongoing process: Thunderbolt merged PCI-e and DisplayPort, Ethernet is starting to replace Fibre Channel, USB has eliminated a whole bunch of ports and cables, etc...

With that in mind, think of 1Tbps Ethernet not as something you'd plug a file server into, but as the interconnect between core switches for metro networks that feed 1Gbps into every house, or the campus uplinks for when 10Gbps to the workstations becomes reasonably common, or a link used by dozens of specialists to perform telepresence surgeries around the country from one central location, or things we haven't even thought of yet...
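
A quick Python check of the digital-cinema figure mentioned in the list above. The pixel count and frame rate are taken from the comment; the bit depth per pixel is my assumption, since the comment does not state it.

    # Back-of-the-envelope rate for an uncompressed 260-megapixel, 24 fps camera.
    pixels = 260e6
    fps = 24
    bits_per_pixel = 48          # assume 16 bits per RGB channel, uncompressed

    gbps = pixels * fps * bits_per_pixel / 1e9
    print(f"Single camera, uncompressed: ~{gbps:.0f} Gbps")
    print(f"Stereo pair: ~{2 * gbps:.0f} Gbps")

With that assumed bit depth the single-camera figure comes out at roughly 300 Gbps, matching the estimate in the comment, and a stereo rig alone would justify aggregation links well beyond 100GbE.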

Zetta! (0)

Anonymous Coward | more than 3 years ago | (#36080972)

Terabit?? Terabit?! Gimme Zettabit Ethernet, give me sex...tillion bits per second, baby!

Re:Zetta! (1)

youn (1516637) | more than 3 years ago | (#36081500)

Terabit?? Terabit?! Gimme Zettabit Ethernet, give me sex...tillion bits per second, baby!

you probably mean titillions per second ;)

Say no to 40Gbps (1)

Blade (1720) | more than 3 years ago | (#36081382)

Come on guys. Powers of 10! You can't be going and moving from my powers of 10 wired Ethernet speeds, how will I do the simple math!

1 -> 10 -> 100 -> 1000 -> 10000

Easy maths! Say no to 40Gbps.

pimp my ethernet (1)

Skapare (16644) | more than 3 years ago | (#36085216)

What we should have had all along was a system by which Ethernet could dynamically adjust its speed in smaller increments to match the existing wiring capacity, both in terms of the bit signaling rate on a pair of wires and how many pairs are used (e.g. if I use 16 pairs from 4 parallel Cat 7 cables, it should boost the speed as much as it can and use them all in parallel). Of course actual devices can have limits, too, and the standard should specify the minimums (like at least 4 pairs required, additional pairs optional).

Sure, we need some new tech to get 1000 gigabit/sec. Fiber, no doubt. Multiple fiber? Better modulation? But whatever is done, THIS TIME they need to not set limits. Set a minimum and define the means/protocol for working up to even higher levels. And make this protocol one that can retrofit into older PHY layers so my 1 gigabit/sec network can run at 1.6 gigabit/sec or better if my cables and connectors are nice and clean.

Stop making new things (0)

Phat_Tony (661117) | more than 3 years ago | (#36081514)

Not literally on the "new thing," but stop making competing ports. Start and then end the next generation port format war as quickly as possible, and everybody get on board with either USB3, Firewire 3200, or Thunderbolt as quickly as possible. Computers should have one row of identical ports that work with everything. We need to get over the idea that certain 1's and 0's need a different shaped plug than others.

Re:Stop making new things (1)

Bengie (1121981) | more than 3 years ago | (#36083388)

Just wait until Thunderbolt hits 40/100Gb. I could see stacked switches using TB for cheap uplinks.

Target requirement (1)

Richy_T (111409) | more than 3 years ago | (#36081670)

The user should notice no delay or lag anywhere, performing any task. This goes not only for bandwidth but operating systems and applications.

Obviously there are physical limitations and ultimately, there are compromises to be made but the above should be a design goal always.

Slow ethernet isn't the problem (0)

Anonymous Coward | more than 3 years ago | (#36081726)

I won't notice (and sure as hell won't pay the premium for) a huge increase over my cat6 home connection for anything I do shy of backing up a complete drive in a few seconds instead of just scheduling it while we're at work.

The problem isn't on my side of the gigabit switch. Find me a carrier that can get close to the limit for even the ancient 10/100 spec let alone a gigabit, then we can talk about a possible faster ethernet.

no speed is too fast (0)

Anonymous Coward | more than 3 years ago | (#36082692)

the faster the better, then the telecoms won't be able to nickel-and-dime us the whole way into the future.

And we won't have to hear their bullshit about content "hogging their pipes"

why is terabit needed ... (0)

Skapare (16644) | more than 3 years ago | (#36085102)

... when the ISPs have barely even scratched the surface of getting megabit to the home.

What the IEEE needs to work on is technology that makes it easier to bring a few hundred megabit to the home. Whoever it was that said no one needed any more than 640kbits to the home was an idiot.

Spoilt Kids! (2)

morgauxo (974071) | more than 3 years ago | (#36085144)

In my day we carried our own packets. 10 miles! In the Snow! Uphill both ways!