Visualizing Ethernet Speed

Zonk posted more than 7 years ago | from the poor-rat dept.

anthemaniac writes "In the blink of an eye, you can transfer files from one computer to another using Ethernet. And in the same amount of time, your eye sends signals to the brain. A study finds that images transferred to the brain and files across an Ethernet network take about the same amount of time." From the article: "The researchers calculate that the 100,000 ganglion cells in a guinea pig retina transmit roughly 875,000 bits of information per second. The human retina contains about 10 times more ganglion cells than that of guinea pigs, so it would transmit data at roughly 10 million bits per second, the researchers estimate. This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second."
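
For reference, here is the scaling behind those numbers as a quick Python sketch; the 10x cell-count factor and the 875,000 bits/sec figure are the article's own estimates:

    # The article's human estimate, scaled up from the guinea pig retina.
    guinea_pig_rate = 875_000        # bits/sec from 100,000 ganglion cells
    human_cell_factor = 10           # humans have ~10x more ganglion cells
    print(guinea_pig_rate * human_cell_factor)  # 8,750,000 -> "roughly 10 Mbit/s"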

140 comments

So if I plug enough CAT5 cables into it... (5, Funny)

Siguy (634325) | more than 7 years ago | (#15803385)

...I can use my guinea pig as a router?

Re:So if I plug enough CAT5 cables into it... (5, Funny)

evanbd (210358) | more than 7 years ago | (#15803417)

Well, apparently you have to plug the cables into its eye sockets, and cramming more than one into each is probably hard. So more of a bridge than a router, I think...

Re:So if I plug enough CAT5 cables into it... (0)

iced_773 (857608) | more than 7 years ago | (#15803505)


Actually, if he means router in the strictest sense, the guinea pig could be a router for a dead-end network. However, finding an IOS for that specific architecture may be a little tricky...

Re:So if I plug enough CAT5 cables into it... (2, Insightful)

ThJ (641955) | more than 7 years ago | (#15803698)

I guess the human eye must use some hellishly good compression. I suspect the human eye of having more 'pixels' than a good DV camera. PAL DV resolution is 720x288 at 50 fields/second, or 720x576 at 25 frames/second. The human eye has 1 type of luma sensor, plus 3 types of chroma sensors. Humans have fewer chroma sensors than luma sensors; let's assume a quarter the number of luma sensors (roughly what chroma subsampling like YUV 4:2:0 assumes). If we assume a very poor human eye with this kind of resolution, we get:

720 * 576 = 414,720 luma
414,720 / 4 = 103,680 chroma
414,720 + 103,680 = 518,400 elements

Let us assume a poor human eye that can see only 25 frames/second in bright light conditions.

518,400 elements * 25 FPS = 12,960,000 elements/second

Add to this that the human eye probably has at least 8 "bits" of color resolution. How is an eye nerve that only transmits 8 "bits" a second going to convey even this relatively poor quality image?
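
For anyone who wants to fiddle with the assumptions, here is the same arithmetic as a small Python sketch; the DV-like resolution, the 4:1 luma:chroma split, and the 25 fps are all assumptions from the text above, not measured values:

    # Back-of-envelope retina bandwidth under the assumptions above.
    luma = 720 * 576                 # 414,720 luma elements
    chroma = luma // 4               # 103,680 chroma elements (assumed ratio)
    elements = luma + chroma         # 518,400 elements total
    fps = 25                         # assumed bright-light "frame rate"
    print(elements * fps)            # 12,960,000 elements/second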

Re:So if I plug enough CAT5 cables into it... (1)

jZnat (793348) | more than 7 years ago | (#15803714)

The brain somehow reconstructs missing information from the images to give a much smoother picture. It's quite interesting, actually.

Re:So if I plug enough CAT5 cables into it... (4, Insightful)

Bonker (243350) | more than 7 years ago | (#15803853)

Your numbers are probably a bit high.

Remember that humans don't see a pixel-per-pixel representation of the world. We see a tight spot of color and detail in the center of our retina (Fovea? Bio-types please correct me) surrounded by blurry shapes and lines. Around the edges, in peripheral vision, we don't even see color, just luminance.

Proof? Take a bright LED lamp and move it into your peripheral vision. What color is it, not from memory, but just from looking at it?

The foveal area of the retina is more densely packed with cells and blood vessels. It has more cones -- chroma-type cells -- than rods.

This indistinct image is inverted and processed into a whole by the brain, which carefully weighs different shapes, lines, movement, flickering, and what-not to produce what you THINK you see. The brain fills in any given pieces of the image that don't have enough detail, frequently from memory.

This is why optical illusions work. You deceive the biological mechanisms that process the image into producing bad data by giving them a skewed sample of the image.

Also, neural mechanisms are asynchronous and really can't be measured in a kb/s rate. You perceive a flicker of motion one second and then a spot of color the next. This is assembled into a ball that you turn to face -- to get a better image -- and then catch. Your brain has a lot of built-in firmware to do image manipulation, but you have to 'learn' the software necessary to do pattern matching and response over your lifetime.

You only get a few bits' worth of information in the first few milliseconds that you're recognizing the ball, and then many megabytes' worth in the last few seconds.

Another thing... as sensitive and immersive as vision is, your ears probably have much, much more data input. They have vastly more dynamic range. Most people don't even notice themselves filling in visual information with audio information, but it does happen.

For example, you hear a person's voice, and you *think* you see their face.

Close your eyes when talking to someone, especially when in a group. Note how easy it is to visualize faces just from hearing voices.

I'm not denying that the brain has massive throughput from the senses, but you really shouldn't try to measure it in digital terms. It's all analogue.

Re:So if I plug enough CAT5 cables into it... (1)

ThJ (641955) | more than 7 years ago | (#15804066)

Yeah. I was waiting for somebody to mention that. I'm not really calculating in digital terms. I say pixels and mean "picture elements". Those might as well be analog. I don't say they're all placed in a perfect 4:3 image like on a CCD either. I might've made a couple of invalid assumptions: that the human eye has a higher number of pixels than a video camera, and that the chroma:luma ratio is the same for the human eye as for a video camera. I still say 8 pulses/sec for one nerve sounds pretty low. I was expecting a higher number. One possibility, though, is that they're so tightly packed and fire off at differing times, thus "diffusing" this low individual nerve framerate into a higher total. Like television scanline interlacing, but done as a diffusion dither instead. And I guess the cells could be intelligent enough not to fire too often if light levels remain constant over time. Etc, etc.

Re:So if I plug enough CAT5 cables into it... (2, Interesting)

Kadin2048 (468275) | more than 7 years ago | (#15804087)

As other people have pointed out, I think your estimates of the data that's actually being sent 'down the wire' from the eye to the brain are probably very high.

This came up in another discussion a while back, but I suspect that even an average digital camera with a good, wide-angle lens probably captures in a single frame more raw information than the human eye does from the same vantage point in a single glance.

You only think that your eye is a really good camera. In reality, it might be pretty bad -- I suspect that if you could watch the "raw feed" from a human eye on a TV screen, you might find it rather disappointing without all the postprocessing done in the brain's visual cortex. Only a small part of it near the center would be high resolution; only the center region would be color, and the periphery would be just good enough to detect movement, not much else.

I guess in a way you could call this "compression," but in reality it's more a credit to the brain and the way the 'receiver' is designed, to create the feeling of a huge, high-refresh-rate, 180-degree, full color, 3-D panorama, from not particularly impressive source imagery.

Basically, the human visual system makes up for the limitations of its cameras (the eyes) in post. In the synthetic machine world, we do not currently have the processing power or the software necessary to do the kind of synthesis that the brain does, so instead we give the machines better 'eyes,' because building cameras is something we do know how to do.

A while ago, I heard someone who was involved in machine vision talk about something called the 'picket fence problem.' A person can pretty easily assemble a good idea of the scene on the other side of a picket fence, or a board with a few holes in it, by moving their head back and forth and then re-assembling the narrow-angle views into something more comprehensive, all in real time. I'm not sure whether machine-vision is there yet today (this was quite a while back), but it's a pretty non-trivial process, or so I was led to understand.

More interesting than the 'bitrate equivalence' of the optic nerve would be some sort of estimate of the "processing power" exerted by an average person's visual cortex during some basic visual activity. I suspect that the result might be surprisingly high, maybe bordering on supercomputer levels right now if solved using current methods.

Re:So if I plug enough CAT5 cables into it... (1)

ThJ (641955) | more than 7 years ago | (#15804130)

See my reply above. I give in, don't hit me. *grin*

Re:So if I plug enough CAT5 cables into it... (1)

Kadin2048 (468275) | more than 7 years ago | (#15804202)

I wasn't bashing on your calculations, they were entirely logical if you think about the eye as a 'digital camera' sort of device ... which I think is how a lot of people do think about it. (Actually when I used to sell cameras, once upon a time, I used to get people asking questions like "what film speed is my eye?" So I expect that people today are probably asking how many megapixels their eye has ... when they really mean 'how many megapixels does my brain make me think that my eye has?')

It's pretty fascinating stuff when you get down to it.

Re:So if I plug enough CAT5 cables into it... (2, Funny)

Iron Condor (964856) | more than 7 years ago | (#15803665)

Actually, if you wrap some duct tape around it you can jam a lot of stuff into guinea pigs...

Ahh, the joys of Sysadmin's Day... (0)

Anonymous Coward | more than 7 years ago | (#15803607)

You can *try* but the ping times are terrible.

Maybe you have to give them an exercise wheel?

Re:So if I plug enough CAT5 cables into it... (0)

Anonymous Coward | more than 7 years ago | (#15803676)

hahahah

Re:So if I plug enough CAT5 cables into it... (1)

Lord Kano (13027) | more than 7 years ago | (#15803765)

Use BNC, it doesn't have the bandwidth for Cat 5.

Besides, cats eat guinea pigs.

LK

Security Vulnerability (-1, Offtopic)

QuantumFTL (197300) | more than 7 years ago | (#15803388)

Secret questions are only as secure as the secret itself - if you just gave that answer off to some web site, what's to stop you from giving it to another? Imagine this - you have an account of someone you want to break into, and you know their email address. You send them an email (tailored to not be like spam at all) inviting them to some special promotion on a site you set up, complete with login and the same security question. Anyone who answers this, poof, they have given you access to whatever account it is that you seek.

Re:Security Vulnerability (1)

Durrok (912509) | more than 7 years ago | (#15803479)

It's the article above this one. Someone just got back from the bar methinks ;)

Re:Security Vulnerability (1)

jboker (990329) | more than 7 years ago | (#15804868)

With the new ettercap eyesniff plugin I see what you see.

PS: stop looking at that.

Wrong article! (1)

QuantumFTL (197300) | more than 7 years ago | (#15803829)

I often wish that there was a "-1, Stupid" moderation, but this is the first time I wish it would be applied to one of my comments.


Break out the SI units, we have a new one (0, Flamebait)

LiquidCoooled (634315) | more than 7 years ago | (#15803398)

Terminal porn velocity.
I am quite certain members of slashdot are currently undertaking to break this new barrier as we speak.
Just watch out for the sonic boom when you accomplish your goal.

Perfection. (0)

Anonymous Coward | more than 7 years ago | (#15803426)

"This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second."

You forgot to add... in a perfect world. Does the visual system suffer similar problems?

Re:Perfection. (0)

Anonymous Coward | more than 7 years ago | (#15804100)

It's far from a perfect world, but gigabit Ethernet can easily move 1,000 million bits per second, and 10GigE can handle nearly 10,000 million bits per second.

Nice comparison (4, Interesting)

Anonymous Coward | more than 7 years ago | (#15803447)

This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second.

Yes, but we have better encoding.

Re:Nice comparison (0)

Anonymous Coward | more than 7 years ago | (#15803638)

Yes, but we have better encoding.

Says who?

Re:Nice comparison (1, Interesting)

Anonymous Coward | more than 7 years ago | (#15803677)

A 1024x768 image at 24bpp is 18,874,368 bits. Obviously, if this article is correct, our brain is doing compression. Say the max resolution of the human eye is 576 megapixels [clarkvision.com] and the max bpp is 48. Therefore the largest image size would be 27.6 Gbit. This would be a compression ratio of about 2765:1.
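
The same numbers in Python, as a sanity check; the 576-megapixel and 48 bpp figures are the assumptions above, and the ratio compares one full-detail frame against one second of the article's 10 Mbit/s:

    # Implied "compression ratio" under the assumed figures.
    raw_bits = 576_000_000 * 48      # ~27.6 Gbit for one full-detail frame
    nerve_rate = 10_000_000          # 10 Mbit/s, per the article
    print(raw_bits / nerve_rate)     # ~2765, i.e. a 2765:1 ratio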
   

Re:Nice comparison (2, Informative)

SnowZero (92219) | more than 7 years ago | (#15804042)

It's not so much compression as having variable resolution. The center of the retina has a much higher resolution than the periphery. Try to look 20 degrees away from some words and try to read them (without looking back). It takes some practice to focus your attention on a different place than where you are looking, but it's not too bad once you get the hang of it. You'll notice that there is very little detail indeed beyond about 5 degrees from the center of focus. The problem with TV and monitor displays is that they don't know where the viewer(s) are looking, so they need high resolution everywhere. Thus the eye can get away with a much lower overall information rate in comparison.

I do find that 10Mb estimate to be a little low, however, since each cell can likely provide more than 10 bits/sec of information, especially when it can fire up to 1k times per second. There is almost surely less than 1000 bits/sec/cell, though, so it's somewhere between 100Mb and 1Gb overall -- still Ethernet speeds.
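
Here is how that overall figure scales with the assumed per-cell rate, as a rough Python sketch; the ~1 million cells is just 10x the article's guinea pig count, and the per-cell rates are guesses:

    # Overall rate = cells x (bits/sec per cell), for a few assumed rates.
    cells = 1_000_000                # ~10x the guinea pig's 100,000
    for rate in (10, 100, 1000):     # assumed bits/sec per cell
        print(rate, cells * rate)    # 10 Mbit/s, 100 Mbit/s, 1 Gbit/s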

There is only a little bit of processing/compression going on at the eye itself; mainly some "center surround" processing that you can roughly compare to an edge-sharpening filter in an image editing program. Most of the real processing goes on in the occipital lobe in the back of your head, such as the V1 layer, which is essentially a 2D computer, with edge finding at various angles for every location in the eye. That feeds into V2 and V3, which do quite a bit more processing and are a little less well understood. One interesting thing is that our visual systems seem to split up into "what" and "where" pathways that independently process the identity of objects and their position. Some individuals with localized brain injuries can tell you they see an object (such as a stapler) but can't tell you where it is, and vice versa with an injury to the "what" pathway. Overall the human visual system is absolutely amazing, and we have a long way to go to catch up with it.
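
If anyone wants to see roughly what that "center surround" step does to an image, here is a toy difference-of-Gaussians filter in Python; it is only an illustration of the edge-sharpening analogy, not a model of real retinal circuitry, and the sigma values are arbitrary:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround(image, center_sigma=1.0, surround_sigma=3.0):
        # Excitatory center minus inhibitory surround, per pixel.
        image = image.astype(float)
        return gaussian_filter(image, center_sigma) - gaussian_filter(image, surround_sigma)

    # Flat regions come out near zero; edges and spots stand out.
    filtered = center_surround(np.random.rand(64, 64))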

P.S. I know a lot about the human visual system because I have done a fair amount of computer vision research, and graduated with a double major in Cognitive Psychology (aka how the brain works).

Re:Nice comparison (1)

arminw (717974) | more than 7 years ago | (#15804925)

.....Overall the human visual system is absolutely amazing, and we have a long way to go to catch up with it......

Even so, most smart, intelligent scientists try to tell us that this amazing system came about not by the work of a superb designer of great intelligence, but by some random statistical processes.

Re:Nice comparison (1)

arminw (717974) | more than 7 years ago | (#15804912)

.......Therefore the largest image size would be 27.6 gb. This would be a compression ratio of 2765:1.......

The cells of the retina do a large amount of very complex signal processing before they pass the optical information on to the brain, where it is interpreted. Thus the amount of data the optic nerve must carry is much less than what the eye's visual acuity takes in. Each retinal cell has a sophisticated biological CPU built into it. The compression is nowhere near the same as any digital compression we know about, since the computational power applied to each pixel at the molecular level is astronomical. No needed information is lost, but the data sent to the brain is very dense. In addition, as pointed out below, the resolution of the image is not uniform, but more dense at the center. The eye also has a huge dynamic range, all the way from sensing a single photon to a sunlit snow scene.

Re:Nice comparison (1)

bcmm (768152) | more than 7 years ago | (#15805028)

Hmm... We don't just use variable resolution. We also do the first stages of processing in the retina, so the comparison could be likened to transferring vector versus bitmap images.

Neuroscience != Computer Science (5, Informative)

KerberosKing (801657) | more than 7 years ago | (#15803460)

I am not sure that thinking of signals from the eye to the brain as working the same way as computer networks is very helpful. I don't think that there is the same sort of contention in a nervous system as there is in Ethernet. Synapses as we understand them today do not appear to have any sort of collision detection. Neurons may be connected to tens of thousands of other neurons in a many-to-one configuration, and the whole process is analog, which is very different from Ethernet frames. Also, a single ganglion cell may send "10 million bits" of information, but the optic nerve is made of many such cells in parallel. I would not be surprised if our current estimates are wrong by at least an order of magnitude.

Re:Neuroscience != Computer Science (1)

TubeSteak (669689) | more than 7 years ago | (#15803482)

I would not be surprised if our current estimates are wrong by at least an order of magnitude.
An order of magnitude which way?

I'm just trying to figure out if I should have dodged that station wagon full of porn DVDs or not.

collision detection (1)

pikine (771084) | more than 7 years ago | (#15803514)

Synapses as we understand them today do not appear to have any sort of collision detection.

You don't need collision detection if the connection is end-to-end and one-way. The reason wired and wireless Ethernet have collision detection is that multiple interfaces are accessing the same channel. If you had multiple eyes on the same optic nerve, you would need collision detection.

Re:collision detection (2, Informative)

Anonymous Coward | more than 7 years ago | (#15803614)

Just to be pedantic here, "wireless Ethernet" does not use collision detection (CSMA/CD). It uses collision avoidance (CSMA/CA, i.e. 802.11), collision mitigation (CDMA, i.e. Navini, etc...), collision prevention (TDMA, polling, and their scheduling kin, i.e. Canopy, etc...), or it's simply FDD (modern expensive point-to-point, or old-school EoAMPS).

Even current wired Ethernet versions (1G, 10G) have dropped collision detection, opting to go full-duplex exclusively. Also, shared cables can now carry multiple different signals without interference, thanks to things like DWDM.

-l

Re:Neuroscience != Computer Science (1)

indrax (939495) | more than 7 years ago | (#15803620)

Also a single ganglion cell may send "10 million bits" of information,

Re-read what the researchers said, and consider that maybe they've looked into this.
If we take the article at face value and divide, it would indicate that a single ganglion cell carries about 8.75 bits per second. So you're suggesting that they are off by several orders of magnitude.

Despite the differences in how brains and computers relay information, it is perfectly valid to estimate any kind of transmission capacity in bits. How else would you have them measure information? The fact is that it is possible to look at a meat computer and figure out how much processing and communication it is doing.

If you think this estimate is so wrong, why? How would so much information get from the eye to the brain? How and why would the eye generate so much information to represent a relatively limited visual field?
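
The division in question, straight from the article's guinea pig numbers:

    # Per-cell information rate implied by the article.
    retina_rate = 875_000            # bits/sec, whole guinea pig retina
    ganglion_cells = 100_000
    print(retina_rate / ganglion_cells)  # 8.75 bits/sec per cell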

Re:Neuroscience != Computer Science (1)

Teresita (982888) | more than 7 years ago | (#15803640)

indrax wrote:
If you think this estimate is so wrong, why? How would so much information get from the eye to the brain? How and why would the eye generate so much information to represent a relatively limited visual field?
How come when people read a book or a monitor they have to move their eyes around? There's a sweet spot in the center of your vision where all the heavy lifting is done as far as processing the information goes. When they are talking about 100 kerjillion bits per second, it refers to feminine intuition, which is massively parallel.

Re:Neuroscience != Computer Science (2, Interesting)

davidsyes (765062) | more than 7 years ago | (#15803693)

The eyes have it....

Well, I was all eyes for the article I partly read yesterday or early this am.

Well, even IF the eyes transmit like a network, the eye study is not apples to apples. More like oranges and mangoes.

How many libraries of eye sockets worth of information is that?

I mean, look at the size of the guinea pig's eye. Of COURSE it transmits less data to the brain. It's not as if it enlarges to accommodate more data. Hell, the human eye is probably 10 times LARGER without expanding. But, I suppose if the eyes DID expand when more data rate was demanded, such information overloads would lead to a whole new meaning of eye-socket-to-yah....

http://www.pimms-pages.co.uk/ [pimms-pages.co.uk]

And, Guinea Pigs aren't pigs of any sort. It's a terrible name to give something that a real pig could kill just by rolling over it.

http://www.oink.demon.co.uk/pets/guinea.htm [demon.co.uk]

What *I* wanna know is how the eyes of bats compare to the guinea pig and the Chupacabra. And to hell with human eyes. I'm talking about the MOVIE "Chupacabra".

Re:( Neuroscience + Spirituality) !~ CS (1)

Dzonatas (984964) | more than 7 years ago | (#15803744)

One obviously forgets about spirituality. Science has made some steps toward setting limits on whereabouts "the spirit" is within the body, and they don't believe it is just the brain anymore, given the newer theories of quantum physics. There is the duality with quantum particles that appear to exist in two physical locations at once but are still considered one object. Quantum theorists have also suggested that "the spirit" actually is like that, in which a part is in every cell, like a node. On that note, the brain is just a computer and not thought of as sentient. I won't go into much detail about those theories, but that would mean a single cell is able to transmit a much greater amount of information to "the spirit," for example, by quantum nodes, than just to the brain.

WOO HOOO 8th post (0)

Anonymous Coward | more than 7 years ago | (#15803464)

That's all. Just had to say it. Nothing else to see here.

Inaccurate blurb. (2, Interesting)

pikine (771084) | more than 7 years ago | (#15803472)

anthemaniac writes:

"In the blink of an eye, you can transfer files from one computer to another using Ethernet. And in the same amount of time, your eye sends signals to the brain. A study finds that images transferred to the brain and files across an Ethernet network take about the same amount of time."

The amount of time it takes to transmit data over a network depends on the product of round-trip time and bandwidth, which determines the TCP window size that optimizes the send/ack of data packets. You also need to take collisions into account.

The ganglion cells are probably more analogous to a link transmitter. The measurement is of the amount of information generated by these cells per second. The proper conclusion is that you could probably use Ethernet to connect the eyes to your brain, and the required bandwidth would be supported.
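
To make the window-size point concrete, here is the usual bandwidth-delay arithmetic in Python; the link speed and round-trip time are made-up illustrative numbers, not anything from TFA:

    # Bandwidth-delay product: data that must be in flight to fill the pipe.
    bandwidth_bps = 100_000_000      # assumed 100 Mbit/s link
    rtt_s = 0.010                    # assumed 10 ms round trip
    bdp_bits = bandwidth_bps * rtt_s
    print(bdp_bits / 8)              # 125,000 bytes of TCP window needed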

Re:Inaccurate blurb. (1)

koterica (981373) | more than 7 years ago | (#15803652)

It's worse than that! "In the blink of an eye, you can transfer..." -- I can't see when I am blinking!

The consciousness time lag (1)

PapayaSF (721268) | more than 7 years ago | (#15803810)

The article also doesn't mention the other crucial time factor, beyond transmission speed: the lag between unconscious perception (when the signal from the eye has reached the brain) and conscious perception (when you are aware that you see something). This lag of roughly half a second was first measured in the 1970s by psychologist Benjamin Libet. We don't sense any lag, though, because we automatically antedate the experience of our sensory inputs, pushing it all a half-second into the past, and thus experiencing everything as "now" even though we are actually a half-second behind. A good book about all this is The User Illusion: Cutting Consciousness Down to Size [amazon.com] by Tor Norretranders.

This could have something to do with the deja vu experience: something goes wrong in the brain, and we somehow sense that half-second lag in a way we normally don't. In other words, you did "see that before," but the "before" was only half a second ago!

Re:Inaccurate blurb. (0)

Anonymous Coward | more than 7 years ago | (#15803930)

That's the dumbest thing I've heard in a while - Ethernet having anything to do with TCP window sizes, I mean. Jeez - just because we're stuck with an inferior higher-level protocol (just like telephone switches staffed by humans weren't the most efficient way of utilizing copper - witness Intel's latest 10GE-over-copper doodad), doesn't invalidate the observation that the guinea pig brain is able to process the data coming over the wire in a more useful way than most current computer systems.

Re: Inaccurate blurb -- yours or the article? (0)

Anonymous Coward | more than 7 years ago | (#15804185)

"In the blink of an eye, you can transfer files from one computer to another using Ethernet. And in the same amount of time, your eye sends signals to the brain. A study finds that images transferred to the brain and files across an Ethernet network take about the same amount of time."
The amount of time you transmit data over a network depends on round trip time and bandwidth product, which determines TCP window size that optimizes the send/ack of data packets. You also need to take collision into account.
This is an odd response, considering they aren't really talking about congestion avoidance windows or anything -- they were simply oversimplifying an overall data transfer rate by talking generically about files. No clue of the size, let alone the protocol. You assume TCP and bring up collisions, when not everything is done with TCP and collisions are pretty much a thing of the past. I can't remember the last time I saw a half-duplex Ethernet network connection, and while you may see backpressure and dropping of packets on switched networks, you aren't going to see collisions.

The ganglion cells are probably more analogous to link transmitter. The measurement is on the amount of information generated by these cells per second. The proper conclusion is that you could probably use ethernet to connect the eyes and your brain, and the required bandwidth is supported.
Yup, the ganglion cells would be like the transmitter, but if you are generating and transmitting the data, then it pretty much needs to flow over some sort of connection(s) to the brain, right? The aggregate speed of the connection(s) is being compared to that of Ethernet. What's wrong with that?

From the article:
This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second.
And I thought gig to the desktop was all the rage these days...

well well well (1)

mseidl (828824) | more than 7 years ago | (#15803501)

but, my eyes aren't encoding with divx ;)

So, if what I see is roughly the same throughput as Ethernet, maybe I should rub my eye against a Cat 5 cable connected to a computer!

Re:well well well (1)

Benji Mack (983108) | more than 7 years ago | (#15804898)

Actually, "encoding with divx" is surprisingly close to what your eyes are doing with every scene and every image they look at. There are neural mechanisms in the eye that perform edge detection and motion detection, not bothering to transmit information about areas of a single colour, so that the information that is actually transmitted to the brain is a heavily simplified, stylised, and encoded version of the image projected onto the retina. Indeed, one of many remarkable aspects of the human visual system is how little the information transferred to the visual cortex really resembles that image, and how marvellously adept is the brain at converting that encoded information into the visual experience that we perceive. In fact, one objective of video encoding is to account with increasing sophistication for the ways that the eye pre-processes visual information, in order to provide the smallest data size possible that gives an image that will be perceived similarly to a scene transmitted "raw" and unprocessed, and with much more information.
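
A crude sketch of the "don't transmit flat, unchanging areas" idea, in Python with numpy; this is a toy delta encoder for illustration only, as both real retinal coding and real video codecs are far more sophisticated:

    import numpy as np

    def pixels_worth_sending(prev_frame, frame, threshold=8):
        # Coordinates and values of pixels that changed enough to transmit.
        diff = np.abs(frame.astype(int) - prev_frame.astype(int))
        mask = diff > threshold
        return np.argwhere(mask), frame[mask]

    # A static scene yields almost nothing to send; motion and edges do.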


but what about pirates with eye patches?! (3, Funny)

cryptonix (163498) | more than 7 years ago | (#15803575)

Arr! They only get 10/half.

Re:but what about pirates with eye patches?! (0)

Anonymous Coward | more than 7 years ago | (#15803785)

Pirating is wrong. You deserve to be capped.

Neurons make my head hurt (3, Informative)

BilZ0r (990457) | more than 7 years ago | (#15803609)

The OP doesn't say that a single retinal cell transmits 10 million bits a second, but that the whole eye does. On top of that, while discussion of collision detection is pointless, thinking about the information a neuronal population can encode does have some merits. Although it's relatively pointless (at least now) to compare the eye to an Ethernet, it has uses in comparing different neural populations.

The problem is that getting bitrates for neuronal populations is more of an art than a science. The sum total of information passed on by a neuron cannot be computed simply by its spiking rate. Large numbers of parameters alter the actual chemical I/O relationship of a neuron: resting membrane potential before spiking, whether it shows short-term facilitation/depression, etc...

So... Logically (2, Funny)

koterica (981373) | more than 7 years ago | (#15803630)

My Ethernet is the same speed as my eyes. My eyes can see my Ethernet. My eyes can see a duck. Therefore, if my Ethernet weighs the same as a duck, it's a witch!

Conversion factor - Guinea Pigs to LOC (1)

NotQuiteReal (608241) | more than 7 years ago | (#15803653)

So how many guinea pigs would it take to see all the data in the Library of Congress in one second?

Google doesn't seem to have this conversion (yet).
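
Google may not have it, but if you accept the oft-quoted (and much-disputed) guess that the Library of Congress holds about 10 terabytes, the arithmetic is easy; the 10 TB is an assumption, while the 875,000 bits/sec per guinea pig retina is the article's figure:

    # Guinea pigs needed to "see" the Library of Congress in one second.
    loc_bits = 10e12 * 8             # assumed ~10 TB, in bits
    eye_rate = 875_000               # bits/sec per guinea pig retina
    print(loc_bits / eye_rate)       # ~91 million guinea pigs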

Please convert the conversion for Europe (0)

Anonymous Coward | more than 7 years ago | (#15803669)

When the answer to this has been posted, please post the metric equivalent for our European friends.

Re:Conversion factor - Guinea Pigs to LOC (1)

houghi (78078) | more than 7 years ago | (#15803780)

The amount of guinea pigs per LOC/s, is that measured in Volkswagens (weight), Empire State Buildings (length), or football fields (surface)?

Ganglion cells (0)

Anonymous Coward | more than 7 years ago | (#15803686)

How much is a ganglion?

10 to 100 million? (0)

Anonymous Coward | more than 7 years ago | (#15803724)

This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second.

Guess the author hasn't heard of gigabit ethernet yet. (Been living under a rock? It has been around since 1998)...

Re:10 to 100 million? gig and 10 gig (1)

Morty (32057) | more than 7 years ago | (#15804445)

Don't forget 10-gig ethernet [ethermanage.com].

But the point of the article was to provide a way to visualize the (native, uncompressed) bandwidth of the eye, not to draw any comparisons between cutting edge technology and the human eye. Most desktop users are familiar with 10/100 ethernet rather than with gig and 10-gig ethernet, so 10/100 forms a better basis of comparison.

My eyes were transmitting 10 mbits/second... (0)

Anonymous Coward | more than 7 years ago | (#15803740)

My eyes were transmitting 10 mbps until I started reading this, at which point they glazed over and were closer to 56 kbps.

Hmm.... (0)

Anonymous Coward | more than 7 years ago | (#15803749)

So, a person with cataracts is like a dialup connection?

AT&T (2, Funny)

richdun (672214) | more than 7 years ago | (#15803750)

Oh great, now AT&T is going to charge me more to see certain things than others. Stupid eye neutrality.

(let's see how many pick up on the joke here...)

Re:AT&T (0)

Anonymous Coward | more than 7 years ago | (#15803969)

Yes, if AT&T really gets their glory days back we'll all have the same model eyes and have to pay them for usage.

~nirokato

Um yeah, I dunno... (2, Interesting)

DavidD_CA (750156) | more than 7 years ago | (#15803813)

Here's how I look at it... the human eye has a "resolution" far greater than that which any monitor supports, and certainly greater than any streaming video I have ever seen.

Add to that the color depth of the human eye. Granted, not 16 M colors, but still pretty high.

The frame rate of the average human eye is somewhere around 40 fps, I believe. Again, faster than what most streaming videos offer.

Then double all that, 'cause we got two eyes.

I'm pretty sure the "bandwidth" between my eyes and brain is a little faster than even the best Ethernet connection. At least, faster than anything I've seen demonstrated so far.

Re:Um yeah, I dunno... (2, Informative)

zizdodrian (987577) | more than 7 years ago | (#15804104)

The frame rate of an eye isn't governed by the eye itself - given that the transfer is analog, the frame rate is governed by what the visual cortex can handle. Which, if you have done any animation, is about 25 frames per second - a speed at which the human visual cortex cannot perceive the individual frames making up an animation.

I doubt the data transfer rate of any nerve is anywhere near the transfer speed of Ethernet - I read an article once stating that human nerve tissue could transmit information at about 400 metres per second - something which Ethernet stomps all over. The beauty of the brain is that it isn't restricted by raw transfer speed. What governs the speed at which the brain computes and calculates is its massively parallel connectivity. In fact, many have speculated that this connectivity is what consciousness itself arises from - it gives the brain a complexity far beyond the number of cells it actually contains.

The second thing to consider is that the brain is not limited by binary transfer - it can utilise chemicals and hormones, variable voltages, and timings to transfer information, not just on/off.
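
For the raw-speed comparison, here are the per-metre propagation delays in Python, using the ~400 m/s nerve figure above and the usual two-thirds-of-light-speed rule of thumb for copper (an assumption, as cable velocity factors vary):

    # Propagation delay over one metre of nerve vs. one metre of cable.
    nerve_mps = 400.0                # nerve conduction speed, from above
    copper_mps = 2e8                 # ~2/3 c, rule of thumb for cable
    print(1 / nerve_mps)             # 0.0025 s, i.e. 2.5 ms per metre
    print(1 / copper_mps)            # 5e-09 s, i.e. 5 ns per metre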

Why then does it seem that... (0)

Anonymous Coward | more than 7 years ago | (#15803821)

Is ethernet so slow?

But the data isn't "pixels" or anything.... (5, Insightful)

sbaker (47485) | more than 7 years ago | (#15803832)

The numbers presented here are very misleading. You get the impression that your eyes are transferring video images as a bunch of pixels at the relatively slow speed of an Ethernet connection. But that's not true. Video processing starts right there in the retina and steadily changes the data from pixel-like data to edges, lines, and shapes, to recognised objects, to high-level concepts that are conveniently tagged with memories, emotions and other relevant data.

At what point are we measuring the data? If the data that's actually being measured is something like "My Mom standing next to a table with a vase full of flowers on it" - then having 10 Mbits/sec is a heck of a lot of data. If it's raw video - then it's pathetically little.

We can estimate the bandwidth your eyes could theoretically produce if they were transmitting "raw video". We know that the retina has a resolution of around 5k x 5k "pixels", we can see motion at around 60Hz, and we have more dynamic range than we can display with 12 bits each for Red, Green and Blue. So at the 'most raw', two eyes would require 5k x 5k x 60Hz x 2 x 12 x 3 bits per second. That's 108 Gbits/sec - which is vastly more than the 10Mbits to 100Mbits this article suggests. You can argue about the details of the numbers I used here - but we're looking at four orders of magnitude - so I'd have to be a LOT wrong for that gap to close!
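
The same estimate in Python, using the numbers above (all of them rough assumptions, as noted):

    # "Raw video" ceiling for two eyes, per the assumptions above.
    pixels = 5000 * 5000             # ~5k x 5k retinal "pixels"
    fps = 60                         # motion perception estimate
    eyes = 2
    bits = 12 * 3                    # 12 bits each for R, G, B
    print(pixels * fps * eyes * bits / 1e9)  # ~108 Gbit/s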

So it's pretty certain that what they are measuring in TFA is some kind of condensed or summarized version of the visual data.

That being the case, it's pretty silly to be comparing "My Mom standing next to a table with a vase full of flowers on it" to a 640x480 JPEG file. It's simply not an apples-to-apples comparison.

But the data isn't "pixels" or anything....Analog. (0)

Anonymous Coward | more than 7 years ago | (#15804031)

Two things you're missing: the fovea, and spectral response (1).

(1) If memory serves, the eyes use the ratios between the colors, not actual magnitudes.

how many ... (0)

Anonymous Coward | more than 7 years ago | (#15803844)

is one ganglion?

Fun but irrelevant (1)

thelamecamel (561865) | more than 7 years ago | (#15803937)

Clearly we're not even comparing apples and oranges here, we're comparing apples and pianos. But it is amusing nonetheless.

Reminds me of a few years back when Apple advertised their latest computers as being "Faster than light". This tagline was withdrawn a few weeks later under attack from the nerd community, but not before some mac fanboys created the amusing argument that the computer completes a floating point calculation in less time than it takes the light from the monitor to reach your eyes. By that (flawed in several ways) logic, the bottleneck in computation is in the display!
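
The fanboys' arithmetic, for the curious; the half-metre viewing distance and the 1 GFLOPS machine are my illustrative assumptions, which is part of why the argument is flawed:

    # Light from the monitor vs. one floating point operation.
    viewing_distance_m = 0.5         # assumed distance to the screen
    c = 3e8                          # speed of light, m/s
    print(viewing_distance_m / c)    # ~1.7e-9 s for light to reach the eye
    print(1 / 1e9)                   # 1e-9 s per op on an assumed 1 GFLOPS box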

Well then, time for an upgrade. (1)

HoboMaster (639861) | more than 7 years ago | (#15804350)

I've got gigabit ethernet on my computer, so I'm obviously going to have to upgrade to gigabit retinal nerves. Do they sell those on NewEgg?

Umm, A little more info would be nice.. (1)

paynesmanor (982732) | more than 7 years ago | (#15804380)

They failed to mention that a computer sees a one-inch red square as about 158,544 bits, whereas the human mind sees it as just one small red square...

Apples and oranges (1)

LordLucless (582312) | more than 7 years ago | (#15804410)

Computers and the biological world handle data differently. Computers use digital data; humans perceive the world in an analog fashion. So we can view a picture at roughly the same speed as you can transfer it across a network. Now change that to text, and I'll bet your calculations are way off; Ethernet will transmit text many times faster than the human eye can read it. There's no point in figuring out how many "bits" the human eye can read in a second, because the human eye doesn't read bits. This is just like measuring the size of the internet in terms of Libraries of Congress.
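
To put a very rough number on the text case: the 250 words per minute, 5 characters per word, and 8 bits per character below are all assumptions, but the conclusion survives any reasonable choice:

    # Reading throughput in "bits", under the assumptions above.
    wpm, chars_per_word, bits_per_char = 250, 5, 8
    reading_bps = wpm / 60 * chars_per_word * bits_per_char
    print(reading_bps)               # ~167 bits/sec; Ethernet wins easily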

who cares about transfer speed? (2, Funny)

ltwally (313043) | more than 7 years ago | (#15804443)

Who cares about the transfer speed. What I want to know is what kind of ping I'm getting.

I guess no Gigabit Jack. (1)

phxhawke (35260) | more than 7 years ago | (#15804530)

And I was looking forward to getting a gigabit interface installed in my skull once cyberjacks became available, too :p


There's a BIG difference actually.... (1)

pandrijeczko (588093) | more than 7 years ago | (#15804748)

...as I've never known an Ethernet network to suffer longer "ping" round-trips and increased packet loss as a result of too much beer...