Content-Centric Networking & the Next Internet

timothy posted more than 2 years ago | from the bits-must-still-flow dept.

waderoush writes "PARC research fellow Van Jacobson argues that the Internet was never designed to carry exabytes of video, voice, and image data to consumers' homes and mobile devices, and that it will never be possible to increase bandwidth fast enough to keep up with demand. In fact, he thinks that the Internet has outgrown its original underpinnings as a network built on physical addresses, and that it's time to put aside TCP/IP and start over with a completely novel approach to naming, storing, and moving data. The fundamental idea behind Jacobson's alternative proposal — Content Centric Networking — is that to retrieve a piece of data, you should only have to care about what you want, not where it's stored. If implemented, the idea might undermine many current business models in the software and digital content industries — while at the same time creating new ones. In other words, it's exactly the kind of revolutionary idea that has remade Silicon Valley at least four times since the 1960s."
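
To make the submission's core idea concrete: in Jacobson's model a consumer asks the network for a name, not a host, and any node holding a matching (signed) copy may answer, with copies cached along the path. Below is a toy sketch of that request flow in Python; the class and the names are invented for illustration and are not the actual CCNx protocol or API.

    class Node:
        """A toy content-centric node: answer interests from the local
        cache if possible, otherwise forward them upstream."""
        def __init__(self, upstream=None):
            self.cache = {}          # content name -> (data, signature)
            self.upstream = upstream

        def publish(self, name, data, signature):
            self.cache[name] = (data, signature)

        def interest(self, name):
            # The consumer names the content, never a host.
            if name in self.cache:
                return self.cache[name]
            if self.upstream is not None:
                result = self.upstream.interest(name)
                if result is not None:
                    self.cache[name] = result   # cache on the return path
                return result
            return None

    origin = Node()
    origin.publish("/parc/videos/demo/0", b"...video bytes...", b"<sig>")
    edge = Node(upstream=origin)
    edge.interest("/parc/videos/demo/0")   # fetched from origin, cached at edge
    edge.interest("/parc/videos/demo/0")   # now served from the edge cache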

Magnet links? (5, Insightful)

Hatta (162192) | more than 2 years ago | (#40907177)

Did he just reinvent magnet links?

Re:Magnet links? (1)

Anonymous Coward | more than 2 years ago | (#40907193)

I wanted to say torrents, but you were faster :)

Re:Magnet links? (5, Insightful)

cayenne8 (626475) | more than 2 years ago | (#40908175)

My concern, whenever I hear about "re-inventing the internet," is that if we do it this time around, all the government types will want protocols in there to ensure no anonymity and tight control...and they'll likely make it difficult for the average person to hook a computer to the internet of the future and become a true peer.

The genie is out of the bottle, even today on the current internet setup....I'd not count on the govt types allowing the next one to have a genie....by force of law.

Re:Magnet links? (4, Informative)

vlm (69642) | more than 2 years ago | (#40907213)

Did he just reinvent magnet links?

Closer to a reinvention of Freenet.
Or maybe reinventing mDNS.
Or maybe reinventing AFS.

It's been a pretty popular idea for a couple of decades now.

Re:Magnet links? (1)

Anonymous Coward | more than 2 years ago | (#40907783)

The term you were looking for is "the Semantic Web": yet another blue-sky, fluffy-bunnies-and-unicorns view of how we should be more concerned with the content itself than with the location or presentation method, but with absolutely no functional methodology for doing so.

Re:Magnet links? (-1, Redundant)

Captain Hook (923766) | more than 2 years ago | (#40907215)

Just what I was about to say

Re:Magnet links? (2)

MightyMartian (840721) | more than 2 years ago | (#40907275)

It looks that way, and of course, it raises the obvious question "What transport layers do you propose to move this data around with?"

Re:Magnet links? (2)

u38cg (607297) | more than 2 years ago | (#40907613)

I have two questions: one, how do you expect to overcome the network effect of TCP/IP, and two, how does this solve the free-rider problem? Who pays for YouTube?

Re:Magnet links? (0, Flamebait)

MightyMartian (840721) | more than 2 years ago | (#40907709)

I think the whole thing falls under "I have a great idea, but I actually don't have the foggiest idea how infrastructure works now, but hey, I need a BIG SEXY CONTROVERSIAL headline."

Imagine if even a tenth of the fucking morons out there who pontificate on subjects about which they have no real knowledge at all actually did have that knowledge. My God, we'd probably be terraforming Pluto by now!

Re:Magnet links? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#40908129)

Is your actual premise here that Van Jacobson, a major contributor to TCP/IP and inventor of the modern flow control it is based on, somehow doesn't have the foggiest idea how the infrastructure HE HELPED FUCKING INVENT works?

Re:Magnet links? (1)

Anonymous Coward | more than 2 years ago | (#40908911)

I think that's the premise most commenters in this thread are operating under. In this case, the guy saying "look, TCP/IP wasn't designed to handle the environment we have now" also happens to be the guy who said the same thing in the late 1980s AND FIXED IT.

Re:Magnet links? (3, Informative)

Jah-Wren Ryel (80510) | more than 2 years ago | (#40908413)

I think the whole thing falls under "I have a great idea, but I actually don't have the foggiest idea how infrastructure works now, but hey, I need a BIG SEXY CONTROVERSIAL headline."

Imagine if even a tenth of the fucking morons out there who pontificate on subjects about which they have no real knowledge at all actually did have that knowledge. My God, we'd probably be terraforming Pluto by now!

The irony is strong in this one.

Anyone pontificating about internet infrastructure who doesn't know Van Jacobson [wikipedia.org] is a fucking moron.

Re:Magnet links? (3, Informative)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#40907369)

" (In Jacobson’s scheme, file names can include encrypted sections that bar users without the proper keys from retrieving them, meaning that security and rights management are built into the address system from the start.)"

It sounds like he made them worse, but otherwise it's pretty similar to magnet links or the mechanisms something like Freenet uses.

Perhaps more broadly, isn't a substantial subset of the virtues of this scheme already implemented (albeit by an assortment of nasty hacks, not by anything terribly elegant) through caches on the client side and various CDN schemes on the server side? In the majority of cases, URLs haven't corresponded to locations for a while now; they correspond to either user expressions of a given wish or auto-generated requests for specific content. (And on the client side, caching doesn't extend to the entire system, for security reasons if nothing else, but it already covers a lot of common web-resource request scenarios.)

Now, in a perfect world, "we have a pile of nasty hacks for that" is an argument for a more elegant solution; but in practice it's closer to "we already have stuff that mostly works and will be cheaper next year," which can be hard on the adoption of new techniques...

Re:Magnet links? (0)

Anonymous Coward | more than 2 years ago | (#40908335)

So if I give somebody a copy of the filename for accessing the file, I've given him a copy without having to make a copy?
I bet the *IAAs really like that.

Re:Magnet links? (1)

houstonbofh (602064) | more than 2 years ago | (#40908979)

Did you not read the first line of the post you responded to?

In Jacobson’s scheme, file names can include encrypted sections that bar users without the proper keys from retrieving them, meaning that security and rights management are built into the address system from the start.

So, no you can't, and yes they will.

Re:Magnet links? (2, Informative)

Anonymous Coward | more than 2 years ago | (#40907503)

more like CDN servers, except smarter.

There are already mechanisms for this.

What needs to exist is a hybrid approach where the end users are the origin servers, and the CDN nodes operate as capacity supernodes at their local ISP; in turn, these ISP supernodes talk to each other. If a piece of content needs to "disappear," the end user removes it from their system, which tells the supernodes that the content is no longer available, leaving only users who already have it to talk to each other if they still want it. If a piece of content is meant to be long-lived (e.g. movies, TV shows), then the originator simply keeps those data files on a dedicated host node.

What happens today is you get torrent/magnet links which get all the bits from everyone, but when people get bored or disconnected, there go your seeds. The other side of this is CDNs, which not everyone uses (think most blogs, webcomics, and self-hosted podcasts). YouTube, for example, has edge nodes at most ISPs, whereas Ustream certainly does not. This gives a preference to YouTube.

So the hybrid approach is to borrow the edge-server part of the CDN, and make these a type of torrent seed that expires when the originator says so. This keeps mistakes to a minimum. It also acts as weak DRM, in that you won't know what the origin server is so as to pull content from it directly; you can only get it from the supernode edge. The supernode edges keep you from having to saturate expensive connections like transatlantic/transpacific/wireless links.

But this isn't solving the fundamental problems: capacity and bad caching practices.
Ads never cache, because they want tracking.
PHP never caches, because the content is dynamic.
These are only small parts of total bandwidth, but they sometimes make up large pieces of web pages. For example, Facebook and G+ widgets can add 2MB per page, of which only a small portion is cached, due to cache-busting techniques like affixing ?v=123 to the end of the script URL, or setting cookies.
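
To illustrate the cache-busting point: a cache keyed on the full URL treats every ?v=123 variant as a distinct object, even when the bytes are identical. Here is a hypothetical sketch of the problem and one mitigation in Python (the "junk" parameter list is made up for illustration):

    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    cache = {}

    def fetch(url, origin_fetch):
        # Naive cache: the full URL is the key, so widget.js?v=123 and
        # widget.js?v=124 miss each other even if the body never changed.
        if url not in cache:
            cache[url] = origin_fetch(url)
        return cache[url]

    def normalized_key(url, junk_params=("v", "cb", "_")):
        # One mitigation: drop known cache-busting parameters from the key.
        parts = urlparse(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in junk_params]
        return urlunparse(parts._replace(query=urlencode(kept)))

    print(normalized_key("http://example.com/widget.js?v=123"))
    # -> http://example.com/widget.js

Cookies are harder: a response that varies by cookie is effectively unique per user, so no shared cache can help.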

Re:Magnet links? (2)

EdIII (1114411) | more than 2 years ago | (#40907623)

This guy belongs in Star Trek, and I don't say that in a derogatory way.

It's worse than magnet links, because he is proposing that the entire Internet (or most of it) work just like that.

The problem is not the technology, it is the societies trying to implement it. Magnet links sound great in theory, but are increasingly (even extremely) dangerous in practice. You would have to be crazy to use public peer-to-peer networks at this point, with Big Content doing its best to shove Freedom's face into the ground and lock down the Internet.

Public methods right now, even with encryption, are like throwing huge raves with underage drinking and drugs in abundance, and seeing a couple dozen narcs, cops, and private investigators mingling with the people.

We could implement his ideas, but the only safe way to do so would be to create an inherently anonymous infrastructure. Not a trivial task.

....And that might undermine many current business models in the software and digital content industries

Really? Maybe?

These are the same people working worldwide to change laws so that they don't have to adapt. It's pretty clear how they deal with anybody attempting to undermine them in any way.

I love the idea in theory, but it goes against the omnipresent need to control content with an iron fist. Incompatible would be an understatement.

Re:Magnet links? (1)

Urza9814 (883915) | more than 2 years ago | (#40908163)

We could implement his ideas, but the only safe way to do so would be to create an inherently anonymous infrastructure. Not a trivial task.

So...like Freenet then? As someone else has already mentioned, this does (at least to me) sound a LOT like the way Freenet addresses files.

Re:Magnet links? (2)

lgw (121541) | more than 2 years ago | (#40908697)

It seems like Slashdot has a "let's reinvent Freenet" story every week now. Freenet may have issues, but it solves a great many current problems. What it lacks is the network effect - there's not really any content there today, so no one uses it (and vice versa).

Re:Magnet links? (1)

Urza9814 (883915) | more than 2 years ago | (#40908937)

I contributed a bit of code to the 0.5 network several years ago...I've been meaning to go back and see if that's still alive now that I have a stable, good internet connection. Just graduated college; didn't really have a connection I could run it on the whole time there. But last I checked 0.5 (FCON) was still populated, and I still can't quite trust the new network. Last I checked there was still better content on the old one anyway! Though certainly not much of it...nothing like it used to be...

I think that was their biggest problem. They had a decent network, with great content. I still have thousands of pages printed out of...well, I'll call them books, because what else do you call an 800 page (8.5x11 paper, 10pt font!) website? And they were written on, and exclusive to, Freenet (as far as I know; I tried and failed many times to find them elsewhere). Then the devs screwed the 0.7 release up as bad as they possibly could. Tried to force the entire community over to pre-alpha software. Some went, some stayed, and some moved to other networks. As far as I know, it never recovered.

Re:Magnet links? (2)

Njovich (553857) | more than 2 years ago | (#40907851)

No. Next question?

a) It predates magnet (magnet is just from 2002; CCN is from the late '90s).
b) Magnet is a naming/addressing scheme; this is a routing technology. There is a difference, although one can be used with the other.

Re:Magnet links? (0)

Anonymous Coward | more than 2 years ago | (#40908109)

The world needs a kind of Universal Library, and efficient infrastructure to support it. The problem of namespaces seems to be the hardest. Think book UDC categories, think DNS names: there is no one uniform namespace to satisfy everyone. Hence the net needs to accommodate multiple administrative domains, document roots, collections, views. To avoid pointlessly hauling redundant data, the whole affair needs to be cache-based, distributed, robust and cryptographically secure. And scale like crazy. Basically, s/squid/repo/ and s/http/git/.

Are there any repository systems or distributed filesystems that might scale to billions of nodes? Is there such a thing as gitfs (git filesystem)?

Re:Magnet links? (2)

Fallingcow (213461) | more than 2 years ago | (#40908819)

Is there such a thing as gitfs (git filesystem)?

$ cd /
$ sudo git init
$ sudo git add .
$ sudo git commit -m "Git filesystem is a go"

Isn't the internet already meeting demand? (3, Insightful)

hawguy (1600213) | more than 2 years ago | (#40907211)

Why does he say "it will never be possible to increase bandwidth fast enough to keep up with demand"?

When I want to watch streaming video, I fire up Netflix and watch streaming video. When I want to download a large media file, I find it on bittorrent and download it. The only time I've noticed any internet slowdowns, it's been in my ISP's network, and it's just a transient problem that eventually goes away.

Sure, Netflix has to do some extra work to create a content delivery network to deliver the content near to where I am, but it sounds like the internet is largely keeping up with demand.

Aside from the IPv4->IPv6 transition (we've been a year away from running out of IP addresses for years), is there some impending bandwidth crunch that will kill the internet?

Re:Isn't the internet already meeting demand? (2)

Baloroth (2370816) | more than 2 years ago | (#40907447)

He seems to be assuming that demand will continue to grow at its current and historical rate. I'd say that isn't a very good assumption: the jump from a text-based web to a video/flash/image one was significant, but the demands of each individual user aren't likely to increase much beyond that. Adding more people will increase demand somewhat, but not by an order of magnitude the way YouTube, Netflix, et al. do, and since people are already watching those just fine, it is hardly an insurmountable issue. Of course, that assumes video bandwidth requirements don't expand to something like streaming 4K, but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.

Re:Isn't the internet already meeting demand? (1)

gman003 (1693318) | more than 2 years ago | (#40907571)

but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.

Uh, what?

I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests comparing my laptop playing 1080p video to my parents' 720p "HDTV", 100% of those surveyed responded "holy crap, that looks better" (margin of error for 95% confidence interval: 9.38%).

Re:Isn't the internet already meeting demand? (2)

Raumkraut (518382) | more than 2 years ago | (#40907745)

I think it's not that most people can't tell the difference between 720p and 1080p, but that they just don't care.

Re:Isn't the internet already meeting demand? (1)

Loughla (2531696) | more than 2 years ago | (#40907975)

That's exactly it. It's not that they can't see the difference; I'm betting most can. It's just that most people don't give two shits about optimizing their home theater experience. My brother-in-law gives me crap about my television and how it is set up incorrectly. I have to tell him every time that I just don't care.

(I'm going to make the next part up, but it makes sense) 75% of people use their television to waste time. 20% use it for background noise while they do something else (my group). I'm betting only 5% of people want to completely optimize their home entertainment experience.

It's just like everything else. There are those that know how to do it, and then there are the plebeian masses.

In my opinion, like most things, it all boils down to just another way to be exclusionary.

Re:Isn't the internet already meeting demand? (1)

hawguy (1600213) | more than 2 years ago | (#40907845)

but considering that most people can't tell the difference between 720p and 1080p, I doubt that will ever happen.

Uh, what?

I can see the difference. I can even see the difference between 1080p and 1440p, or 1440p and 2160p. And it's not a slight difference that I could understand people missing. In informal tests comparing my laptop playing 1080p video to my parents' 720p "HDTV", 100% of those surveyed responded "holy crap, that looks better" (margin of error for 95% confidence interval: 9.38%).

You're not "most people" - "most people" haven't even seen 1440p.

And how do you make any sort of fair comparison between a 17" laptop screen and a 32" (or larger?) HDTV? There's no way to fairly compare the two because of the screen size difference.

At normal viewing distances, most people can't see the difference between 720p and 1080p -- you'd need to be within 5 feet of your 40" TV to see the difference. Sure, maybe you have a home theater with a 60" TV and seats 6 feet away, but most people have a TV in the corner of the living room and don't arrange seating for optimal 1080p viewing distance.

http://carltonbale.com/1080p-does-matter/ [carltonbale.com]
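
The arithmetic behind that figure is easy to check. A back-of-the-envelope sketch, assuming the usual one-arcminute resolution for 20/20 vision (an assumption, not taken from the linked page):

    import math

    def max_useful_distance_in(diagonal_in, horiz_px, aspect=16/9):
        # Farthest distance at which one pixel still subtends one arcminute.
        width = diagonal_in * aspect / math.hypot(aspect, 1)
        pixel_pitch = width / horiz_px
        return pixel_pitch / math.radians(1 / 60)

    print(max_useful_distance_in(40, 1920) / 12)  # ~5.2 ft for 1080p on a 40" set
    print(max_useful_distance_in(40, 1280) / 12)  # ~7.8 ft for 720p

Past roughly five feet, a 40" 1080p panel and a 720p one are indistinguishable by this measure, which is the claim above.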

Re:Isn't the internet already meeting demand? (1)

gman003 (1693318) | more than 2 years ago | (#40908097)

The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.

Higher resolution does matter. Maybe there is a limit (I haven't seen 4K video on a home-size screen yet), but we're far from reaching it.

And you also have to think about changes in consumption. More and more people aren't lounging on the couch and staring at a massive screen three/four meters away, but sitting at a desk watching video on a smaller screen, or watching on a laptop that's, at worst, on the coffee table.

So now it's not a matter of a screen half a room away - it's a screen a meter away, or less. And I can *definitely* see a difference between my 17" 1920x1080 laptop and my old 15" 1280x800 laptop (basically 720p, but 16:10 instead of 16:9 so it has 80 extra vertical pixels).

Re:Isn't the internet already meeting demand? (1)

hawguy (1600213) | more than 2 years ago | (#40908313)

The comparison was actually with the laptop being next to the TV, so that's about as valid as I can get it. I've even found they can see a difference between a 1280x720 TV and my 1600x900 monitor, and that's much less a difference in physical size AND in resolution.

That's about as invalid as you can get. You're comparing a 100+ dpi laptop screen with a 40 or 50 dpi TV screen. Of course people are going to like the sharper screen of the laptop better.

Re:Isn't the internet already meeting demand? (0)

Anonymous Coward | more than 2 years ago | (#40908423)

I'm cool with ~400p as long as compression is very good, and yes, I have seen 1080p, I could not care less.

Re:Isn't the internet already meeting demand? (3, Interesting)

ShanghaiBill (739463) | more than 2 years ago | (#40907993)

the demands of each individual user aren't likely to increase much beyond that.

I think your thinking is way too constrained. If the bandwidth were available, then people could have immersive 3D working environments, and telecommuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.

You also need to think about things like "Siri", which send audio back to the server for processing because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.

If the bandwidth is available and affordable, the applications will come.

Re:Isn't the internet already meeting demand? (1)

hawguy (1600213) | more than 2 years ago | (#40908441)

the demands of each individual user aren't likely to increase much beyond that.

I think your thinking is way too constrained. If the bandwidth were available, then people could have immersive 3D working environments, and telecommuting could be far more common. This would result in much less traffic on the roads and a huge reduction in CO2 emissions and oil imports. This is not science fiction. I have used Cisco's "Virtual Meeting Room" and it is pretty good.

You also need to think about things like "Siri", which send audio back to the server for processing because there isn't enough horsepower in a cellphone. I could see "smart glasses" of the future sending video back to a server. That will require huge bandwidth.

If the bandwidth is available and affordable, the applications will come.

I work in a large multi-building "campus" (well, more of an office park; we have offices in several buildings). It's a 15-20 minute walk from one building to the farthest one (depending on who is doing the walking).

We have practically unlimited bandwidth between buildings (and at least a gigabit to remote offices), yet we still make people trudge between buildings for meetings, and teleconferences with remote sites are 720p (or Skype). So bandwidth isn't what's keeping us from immersive teleconferencing. We'd probably save a dozen man-hours every day by eliminating the need to walk between buildings for meetings, which would easily pay for an immersive teleconference system (10 man-hours * 250 days/year * $50/hour = $125K/year in labor savings), yet we don't even use the teleconference system we have now for meetings between buildings.

Boring (4, Insightful)

vlm (69642) | more than 2 years ago | (#40907231)

it will never be possible to increase bandwidth fast enough to keep up with demand.

I've been hearing that since I got on the net in '91. Tell me a new lie.

It's an end-times message: "Repent, for the end is near." Yet, stubbornly, the sun always rises tomorrow.

Re:Boring (3, Interesting)

JoeMerchant (803320) | more than 2 years ago | (#40907385)

Two words: Dark fiber [wikipedia.org] . Laying absurdly high-capacity trunk lines is no more expensive than burying an old copper wire bundle.

Re:Boring (1)

TooMuchToDo (882796) | more than 2 years ago | (#40907407)

And fiber lasts a hell of a lot longer than copper. I don't know of any ILEC *not* replacing their copper with fiber when the copper gets to EOL.

Re:Boring (1)

maroberts (15852) | more than 2 years ago | (#40907477)

And fiber lasts a hell of a lot longer than copper. I don't know of any ILEC *not* replacing their copper with fiber when the copper gets to EOL.

Or when some metal thieves can't find enough scrap metal above ground.

Re:Boring (1)

VortexCortex (1117377) | more than 2 years ago | (#40908345)

Or when some metal thieves can't find enough scrap metal above ground.

Hippies love color-changing things w/ LEDs -- there's certainly a market for fiber thieves.

That's what gets modded +5 Insightful these days? (0)

Anonymous Coward | more than 2 years ago | (#40907963)

I've been hearing that since I got on the net in '91. Tell me a new lie.

It's an end-times message: "Repent, for the end is near." Yet, stubbornly, the sun always rises tomorrow.

Your well-reasoned, multi-year, in-depth technical analysis and reams of substantiating data have me convinced that it's all just a "lie" (as you put it).

May I mod you super-genius? I was afraid your post was just going to be some typical uninformed anecdotal horse-manure that provided all the insight of a dead skunk.

Re:Boring (0)

Anonymous Coward | more than 2 years ago | (#40908539)

Until it doesn't. Your approach of just ignoring it is just as irrational as those who proclaim the end is always near.
Best to hope all is well while also preparing, just in case, for when it isn't.

Sounds like the principle behind URNs (5, Informative)

QilessQi (2044624) | more than 2 years ago | (#40907233)

See http://en.wikipedia.org/wiki/Uniform_resource_name [wikipedia.org] . This is a very old [and good] idea.

For example: urn:isbn:0451450523 is the URN for The Last Unicorn (1968 book), identified by its [ISBN] book number.

Of course [as the dept. notes] you still need to figure out how to get the bits from place to place, which requires a network of some kind, and protocols built on that network which are not so slavishly tied to one model of data organization that we can't evolve it forward.
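
A URN deliberately separates naming from retrieval; resolving the name to locations is a second step. A minimal sketch of that split in Python (the resolver table here is hypothetical, not a real registry):

    def parse_urn(urn):
        # Form: urn:<namespace-id>:<namespace-specific-string>
        scheme, nid, nss = urn.split(":", 2)
        assert scheme == "urn"
        return nid, nss

    # Hypothetical per-namespace resolvers mapping a name to candidate locations.
    RESOLVERS = {
        "isbn": lambda nss: ["https://openlibrary.org/isbn/" + nss],
    }

    nid, nss = parse_urn("urn:isbn:0451450523")   # The Last Unicorn
    print(RESOLVERS[nid](nss))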

Re:Sounds like the principle behind URNs (0)

Anonymous Coward | more than 2 years ago | (#40909263)

Yeah, so what is the URN for your comment?

The internet is useful because you can put any content or service on it you like without begging some central authority to sanction your submissions with some special coding.

I'm not interested in interactive-TV and I don't think I'm alone in that.

A CAS by any other name. (0)

Anonymous Coward | more than 2 years ago | (#40907237)

Arguably, this is just a CAS. Now, of course, Freenet/Entropy have been trying their hand at this in an anonymizing setting, much as Tahoe-LAFS has been trying to do it in an encrypted fashion. A well-functioning CAS with sufficient FEC, positioned more towards usability and less extremely towards anonymity, may be just what we need; single-hop anonymity with lots of storage (a DHT on short I2P tunnels, say) may make a distributed, safe-enough Usenet possible.
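
For readers who haven't met the acronym: in a content-addressable store (CAS) the key is derived from the bytes themselves, so the name doubles as an integrity check and any replica is as trustworthy as the original. A minimal sketch:

    import hashlib

    class CAS:
        def __init__(self):
            self.blobs = {}

        def put(self, data: bytes) -> str:
            key = hashlib.sha256(data).hexdigest()
            self.blobs[key] = data
            return key                      # the name IS the hash

        def get(self, key: str) -> bytes:
            data = self.blobs[key]
            # Verifiable no matter which host or peer served it.
            assert hashlib.sha256(data).hexdigest() == key
            return data

    store = CAS()
    key = store.put(b"alt.distributed.usenet, article 1")
    print(key[:16], store.get(key))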

Re:A CAS by any other name. (2)

MightyMartian (840721) | more than 2 years ago | (#40907289)

But, of course, it's all going to be running on top of TCP/IP. This isn't a replacement, it's just another widget you run on the tubes.

Re:A CAS by any other name. (2)

QilessQi (2044624) | more than 2 years ago | (#40907693)

Agreed, that's the only realistic approach. Build support for URNs into browsers, get the caching infrastructure in place so that URN'ed data migrates seamlessly to follow demand, and finally get people to migrate from URLs to URNs.

And while we're at it, get rid of the "TLD" concept altogether: com vs. org vs. net vs. whatever. Names should be doled out to match the jurisdiction of regional naming authorities, with a special international "top level". So you might have:

* /i/google -- internationally-registered name

* /us/tomshardware -- nationally-registered trademark/servicemark in the US (similar for /ca, /fr, /de, etc.)

* /us/gov/fbi -- federal-level US agencies

* /us/ny/empirestatebagels -- businesses registered at the state level only

* /us/ny/gov/dmv -- state-level US agencies (New York Dept of Motor Vehicles)

* /us/ny/nyc/gov/cityhall -- city-level agencies

The wrangling over the specifics would be fun. :-)

look at the source (1)

jjeffries (17675) | more than 2 years ago | (#40907273)

If this had come out of almost anyone else's mouth, I'd be the first to say they were full of it.

But... Van Jacobson [wikipedia.org] !

Re:look at the source (2)

Attila Dimedici (1036002) | more than 2 years ago | (#40908393)

Yes, and if you read that link you discover that he has been pushing this idea since 2006. So, while he has some good credentials to say that the sky is going to fall, he has been saying it for six years now. The sky hasn't fallen and the only sign that it might is the complaints of cellphone vendors, ISPs, and content producers whose profits have not risen as fast as they thought they would and/or would like them to.

Ideas are easy (2)

Ryanrule (1657199) | more than 2 years ago | (#40907327)

Any idiot can have a pile of ideas. The implementation is what matters.

Too bad the idea pays 95%, the implementation 5%

Re:Ideas are easy (1)

JoeMerchant (803320) | more than 2 years ago | (#40907609)

Any idiot can have a pile of ideas. The implementation is what matters.

Too bad the idea pays 95%, the implementation 5%

That's a common misconception. It's the person with the superior legal standing that gets paid 99%, and IP only grants superior legal standing if you've also got the lawyers to back it up.

Re:Ideas are easy (2)

real gumby (11516) | more than 2 years ago | (#40908153)

Any idiot can have a pile of ideas. The implementation is what matters.

I like this quote, but personally would not attempt to use it when talking about Van Jacobson [wikipedia.org]

Re:Ideas are easy (1)

VortexCortex (1117377) | more than 2 years ago | (#40908415)

Any idiot can have a pile of ideas. The implementation is what matters.

Too bad the idea pays 95%, the implementation 5%

I run into "Ideas Men" in the indie game dev scene all the time... Most never make a game unless they learn actual coding, art, or music -- some actual skill other than thinking up WiBCIs ("wouldn't it be cool if ___"s). In my experience, it's the implementation that pays; ideas are worth less than a dime a dozen.

Dynamic caching? (4, Interesting)

Urban Garlic (447282) | more than 2 years ago | (#40907337)

So back in the day, we had a thing called the mbone [wikipedia.org] , which was multicast infrastructure that was supposed to help with streaming live content from a single sender to many receivers. It was a bit ahead of its time, I think; streaming video just wasn't that common in the 1990s, and it really only worked for actually-simultaneous streams, which, when streaming video did become common, wasn't what people were watching.

The contemporary solution is for big content providers to co-locate caches in telco data centers, so while you still send multiple separate streams of unsynchronized, high-demand streaming content, you send them a relatively short distance over relatively fat pipes, except for the last mile, which only has to carry one copy anyway. For low-demand streaming content you don't need to cache; it's only a few copies, and the regular internet mostly works. It can fall over when a previously low-demand stream suddenly becomes high-demand, like Sunday night when NASA TV started to get slow, but it mostly works.

TFA (I know, I know...) doesn't address moving data around, but it seems like this is something that a new scheme could offer -- if the co-located caches were populated based purely on demand, rather than on demand plus ownership, then all content would be on the same footing, and it could lead to a better web experience for info consumers. That's a neat idea, but I think we already know how both the telcos and commercial streaming content owners feel about demand-based dynamic copy creation...

Re:Dynamic caching? (0)

Anonymous Coward | more than 2 years ago | (#40908975)

Just out of curiosity ... have you ever looked at the credits for mbone? http://www.lbl.gov/ITSD/MBONE/

From TFA, explaining *how* this would work (1)

emurphy42 (631808) | more than 2 years ago | (#40907341)

Similarly, in a content-centric network, if you want to watch a video, you don’t have to go all the way back to the source, Lunt says. “I only have to go as far as the nearest router that has cached the content, which might be somebody in the neighborhood or somebody near me on an airplane or maybe my husband’s iPad.”

Of course, caching data at different points in the network is exactly what content distribution networks (CDNs) like Akamai do for their high-end corporate clients, so that Internet videos will start playing faster, for example. But in a content-centric world, Lunt says, the whole Internet would be a CDN. “Caching becomes part of the model as opposed to something you have to glue onto the side.”

I suppose it makes sense. The smarter the intermediate nodes are about deciding what to cache (based on popularity, size, speed of original request, who's nearby and what they have cached), the better this would work.
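
As a sketch of what "smarter about deciding what to cache" could mean, here is a toy cache that evicts by a popularity-per-byte score; the scoring rule is invented for illustration, not taken from TFA:

    class PopularityCache:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.items = {}   # name -> data
            self.hits = {}    # name -> request count

        def _score(self, name):
            # Favor small, frequently requested objects: hits per byte.
            return self.hits.get(name, 0) / max(len(self.items[name]), 1)

        def put(self, name, data):
            self.items[name] = data
            self.hits.setdefault(name, 0)
            # Evict the lowest-scoring objects until we fit again.
            while sum(len(d) for d in self.items.values()) > self.capacity:
                del self.items[min(self.items, key=self._score)]

        def get(self, name):
            if name in self.items:
                self.hits[name] = self.hits.get(name, 0) + 1
                return self.items[name]
            return None   # a miss: forward the request upstream

A real router would also weigh fetch cost, object size, and neighborhood demand, as the parent suggests.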

Re:From TFA, explaining *how* this would work (1)

metrometro (1092237) | more than 2 years ago | (#40908035)

How is this different from BitTorrent? Isn't this the same principle, in a more router-oriented way?

Skip to "Profit"! (0)

Anonymous Coward | more than 2 years ago | (#40907345)

it's exactly the kind of revolutionary idea that has remade Silicon Valley at least four times since the 1960s."

Well, that's settled.

SQ (1)

Anne Thwacks (531696) | more than 2 years ago | (#40907351)

So we replace URLs with SQLs?

(The point of SQL is that you say what you want, not where to find it - hence the concept of "NoSQL" is just silly.)

Re:SQ (0)

Anonymous Coward | more than 2 years ago | (#40907569)

So you don't use the FROM clause?

Re:SQ (1)

VortexCortex (1117377) | more than 2 years ago | (#40908467)

So you don't use the FROM clause?

No, he LIKEs using *.

It doesn't scale. (1)

Anonymous Coward | more than 2 years ago | (#40907359)

A query for information goes where... broadcast.

Think about how many packets that requires... Now think about how many search engines that assumes...

And then think about the returning packet storm.

Now consider millions of queries...

Nope. Not gonna happen.

Never gonna happen, because... (1)

Nutria (679911) | more than 2 years ago | (#40907387)

there's already too much TCP/IP infrastructure bought, paid for and in use.

you should only have to care about what you want," (2)

rickb928 (945187) | more than 2 years ago | (#40907397)

"not where it's stored."

So we should make the Internet into Plan 9?

Re:you should only have to care about what you wan (1)

Jawnn (445279) | more than 2 years ago | (#40908589)

"not where it's stored."

So we should make the Internet into Plan 9?

Your stupid minds. Stupid! Stupid!

You're working for the clampdown (1)

paiute (550198) | more than 2 years ago | (#40907405)

“We can sit here and speculate about where the tollbooths will go, but to me, it’s more about whether there are pockets of money out there ready to address problems that people have now. The tollbooths will go where they need to be.”

I'm pretty sure where the tollbooths will be - embedded in your local ISP. They will be put there by the music and movie industries, so that when you, in this new future, request a tune or a clip by name rather than by IP address, you can be either billed or denied.

Too many costs involved (1)

erroneus (253617) | more than 2 years ago | (#40907409)

There is not only a cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high, as the current methods are deeply entrenched. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such a change, and even then, people will gravitate to the solution with the least amount of change possible.

Re:Too many costs involved (2)

Jawnn (445279) | more than 2 years ago | (#40909003)

There is not only a cost of deploying the new tech, but also the cost of change. That cost of change is REALLY high, as the current methods are deeply entrenched. IPv6 isn't "there" yet... and the experience has been dizzying for many. Now there's another new approach? It may be better, but people don't want the change. Something catastrophic will have to cause such a change, and...

Yeah, like Y2K. Oh, wait....
I know! Let's get Apple to build it. Apple people will pay obscene sums for shiny new stuff with Apple logos on it.

This isn't a new idea (1)

Omnifarious (11933) | more than 2 years ago | (#40907421)

But it's good that someone who was involved in the early Internet realizes that it's a good one.

And no, it doesn't mean throwing TCP/IP away.

But really, Slashdotting should be impossible. To me, the fact that it is possible indicates a fundamental problem with the current structure of the Internet. Any alternative to content-addressing has to solve the Slashdotting problem for everybody (even someone serving up content from a dialup); otherwise it doesn't really solve the problem.

Sounds like.. (1)

wbr1 (2538558) | more than 2 years ago | (#40907453)

BitTorrent and other p2p protocols. Even if -all- content were distributed this way, you would still need an underlying network, link, and transport mechanism. TCP/IP serves that very well, and then hopefully you have no hotspots of traffic or failure, because of the distributed nature of the content. Another interesting facet is that if all content is truly distributed and redundant, with no single point of storage, master copy, or decryption, there is no way to EVER remove content completely.

Something like Freenet maybe? (1)

cpghost (719344) | more than 2 years ago | (#40907463)

Let's consider Freenet [freenetproject.org] . Don't they store and retrieve data based on some cryptographic keys? Of course, data is distributed across all participants, and communications still piggyback on top of IP. But that's what I'd call content-centric networking. Content isn't found by location, but by its nature (hash/key/...).

But I *DO* care where my content comes from! (4, Insightful)

jmac880n (659699) | more than 2 years ago | (#40907513)

There is a huge chunk of the Internet that cares very much where the content came from:
  • Who exactly is asking me to transfer money out of my account?
  • Did this patch that I downloaded come from a reputable server? Or will it subvert my system?
  • Is this news story from a reputable source?

And the list goes on....

Re:But I *DO* care where my content comes from! (4, Insightful)

Hatta (162192) | more than 2 years ago | (#40907687)

Who exactly is asking me to transfer money out of my account?
Did this patch that I downloaded come from a reputable server? Or will it subvert my system?
Is this news story from a reputable source?

None of these depend on the location of the data, only the identity of the author. If you can verify the integrity of the data, where you get it is irrelevant.

Re:But I *DO* care where my content comes from! (2)

Chemisor (97276) | more than 2 years ago | (#40908247)

Except that the location of the data is the primary way of verifying the identity of the author. How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer? I go to www.companyx.com and get the patch from there. Sure, there's DNS spoofing, MITM attacks, etc., but in general going to the authorized location is a pretty reliable method of identity verification. With this content-centric network, there is no way to reliably get the keys to verify the integrity of the data. After all, anybody can claim to be CompanyX and provide fake keys and fake malware-riddled patches. Accountability is required for security, and currently network location is the simplest way to implement accountability.

Re:But I *DO* care where my content comes from! (1)

Hatta (162192) | more than 2 years ago | (#40908709)

Except that the location of the data is the primary way of verifying the identity of the author.

Only for historical reasons, not technical, and it's always been a bad way of identifying the author.

How am I supposed to know that the game patch I have just downloaded came from CompanyX, rather than from some malware spammer?

Cryptographic signatures.
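
A minimal sketch of that flow with Ed25519 signatures via the pyca/cryptography package (the patch bytes are illustrative; distributing the publisher's public key is the genuinely hard part):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Publisher (CompanyX) signs the patch once, at release time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()   # shipped with the game, or in a keyring
    patch = b"patch-1.0.2 contents"
    signature = private_key.sign(patch)

    # Client verifies the bytes no matter which mirror, cache, or peer sent them.
    try:
        public_key.verify(signature, patch)
        print("authentic: install it")
    except InvalidSignature:
        print("reject: not from CompanyX")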

Re:But I *DO* care where my content comes from! (1)

lgw (121541) | more than 2 years ago | (#40909407)

How do you today validate the identity of any host? Certificates. How would you validate the authenticity of any content retrieved by a hash? The same certificates (used to digitally sign the data). Moving to signed data would make phishing attacks far more difficult (and though the certificate system itself has real problems, those problems exist today).

Re:But I *DO* care where my content comes from! (2)

omnichad (1198475) | more than 2 years ago | (#40908253)

And if integrity is based on hash/signature, then it suddenly becomes relevant if computing catches up and can generate a collision. And then you have to upgrade the entire Internet at once to fix it.

Re:But I *DO* care where my content comes from! (0)

Anonymous Coward | more than 2 years ago | (#40909155)

I don't think you really know how big the namespace is for modern hashes. We're talking hash functions like Skein, which has 256-, 512-, and 1024-bit block sizes. I dare you to find a collision in a 1024-bit hash from a function that has survived an international competition like SHA-3.

Re:But I *DO* care where my content comes from! (2)

jg (16880) | more than 2 years ago | (#40908617)

*All* content is signed in CCNx by the publisher.

You can get a packet from your worst enemy, and it's ok. The path it took to get to you doesn't matter. If you need privacy, you encrypt the packets at the time of signing.

So, what we need .... (1)

PPH (736903) | more than 2 years ago | (#40907527)

... is some infrastructure that we tell what we want and it tells us where it is. Or better yet, fetches it for us. Already done:

The Pirate Bay/BitTorrent.

Nope ... but close (2)

oneiros27 (46144) | more than 2 years ago | (#40907547)

Magnet links only use the hash, so there's a possibility of hash collisions. He's proposing an identifier + resolver scheme ... which again, has been done many, many times already.

Eg, ARK [wikipedia.org] or OpenURL [wikipedia.org]

Or, we get to the larger architecture of storing & moving these files, such as the various Data Grid [wikipedia.org] implementations. (which may also allow you to run reduction before transfer, depending on the exact infrastructure used).

DNS? (0)

Anonymous Coward | more than 2 years ago | (#40907595)

Didn't really read TFA, but this is what DNS is for. I don't care /where/ kernel.org is, or even if it's in the same place every time I access it.

"Anything you can do, I can do meta!" (1)

Anonymous Coward | more than 2 years ago | (#40907605)

Van, Sally Floyd, and Lixia Zhang have been talking about this for a while; how much of it ties back to the adaptive web caching project from the late '90s would give one a sense of how long the idea has been kicking around. One of the neat things that fell out of that project was routing and forwarding on decomposed URNs or URLs; there was a paper on that in IEEE INFOCOM 2000.

At the risk of sounding snarky: Snap! That's the basic idea behind content-centric networking! And that basic idea is patented.

But here's the problem: You still need a network to transport packets. That was the big win in Internet engineering: the ability to create a level of indirection to hide the nastiness of bridging across different media. Sometimes this indirection layer worked well (cf. Ethernet), sometimes it was really clunky and nasty (cf. ATM). And you still need to choose a transport style, connectionless or connection-oriented. And you still need... feel free to add more to the list.

Van's proposal doesn't invent a new internet. It's a new indirection layer and possibly a replacement transport layer.

CCN is not $other_technology (2)

Njovich (553857) | more than 2 years ago | (#40907663)

Any time someone talks about Content Centric networking or routing, there are always a bunch of people saying that it's basically the same as distributed hash tables, multicast, a cache, etc.

It may use such technologies, but it isn't the same.

Content Centric is all about having distributed publish/subscribe, usually at a lower network layer.

The "content" part of the name means that routing looks at the content itself, not at some explicit address. For instance, to give a very simple example, you can send out a message [type=weather; location=london; temperature=21], and anyone subscribing to {location==london && temperature>15} will receive this message.
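
In code, that matching step is just a predicate evaluated against each message's fields; a toy sketch (the subscription format is invented for illustration):

    # Content-based pub/sub: routing looks at message fields, not at addresses.
    subscriptions = [
        ("london-warm", lambda m: m.get("location") == "london"
                                  and m.get("temperature", 0) > 15),
    ]

    def publish(message):
        return [name for name, predicate in subscriptions if predicate(message)]

    print(publish({"type": "weather", "location": "london", "temperature": 21}))
    # -> ['london-warm']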

The network is typically decentralized, and using this kind of method can give a number of interesting efficiency benefits.

This is currently mostly being used in some business middleware, ad hoc networking stuff, and some grid solutions. None of those are particularly large.

The real problems with widespread use of this technique are the following:

* It's unnecessary: IPv6 is completely necessary, somewhat doable in terms of upgrading, and almost nobody is using it even now. This is someone suggesting a whole new infrastructure for large parts of the internet. The fact is, this would possibly be more efficient than many things being done now, but in reality nobody cares about it. Facebook and YouTube (OK, Google) would rather just pay for the hardware and bandwidth than give up control.

* Security is still unclear: it's easy to do some hand-waving about PKI, but it's hard to come up with a practical solution that works for many.

Re:CCN is not $other_technology (3, Informative)

w_dragon (1802458) | more than 2 years ago | (#40907841)

There are a couple other little issues:

You need to be able to find things somehow. This requires either some set of central servers, which somewhat defeats the purpose, or a method of broadcast communication that isn't blocked by your ISP. There's a good reason your ISP blocks UDP broadcast and multicast packets - on a large network broadcast leads to exponential packet growth.

For most of us, the most limited part of the internet infrastructure is the link from the last router to our house. Picking up my YouTube cat videos from my neighbour rather than from a cache server on my ISP's backbone may seem like a good idea, but in reality you're switching traffic from a high-capacity link between my street's router and my ISP to a low-capacity link between my neighbour and our router.

If you're going to cache things on my computer you're going to be using my hardware. That hardware isn't free, and neither are the bits you want to use my internet connection to send. How am I going to be compensated?

Re:CCN is not $other_technology (1)

Njovich (553857) | more than 2 years ago | (#40908009)

This requires either some set of central servers, which somewhat defeats the purpose, or a method of broadcast communication that isn't blocked by your ISP.

No central servers are needed, and you don't really need broadcast either (although both are used by some solutions). However, you may need or want brokers/routers at local points, and they may need bigger caches than you would currently have. That can be a problem, yes.

(IP-level) broadcast is not really needed, as the scheme already implements a kind of multicast. I don't think ISPs will block this kind of thing; plain old TCP would suffice just fine as the underlying layer.

I would really suggest reading some of the papers on the technology. You are right that there are more problems with the technology, though.

Old future. (1)

hendrikboom (1001110) | more than 2 years ago | (#40907665)

you should only have to care about what you want, not where it's stored.

Isn't that what Google is for?

Not a new idea, or a useful one (3, Interesting)

Animats (122034) | more than 2 years ago | (#40907747)

This has been proposed before. It's already obsolete.

The Uniform Resource Name [wikipedia.org] idea was supposed to do this. So was the "Semantic Web". In practice, there are many edge caching systems already, Akamai being the biggest provider. Most networking congestion problems today are at the edges, where they should be, not at the core. Bulk bandwidth is cheap.

The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.

As for running out of bandwidth, we're well on our way to enough capacity to stream HDTV to everybody on the planet simultaneously. Beyond that, it's hard to usefully use more bandwidth. Wireless spectrum space is a problem, but caching won't help there.

The sheer amount of infrastructure that's been deployed merely so that people can watch TV over the Internet is awe-inspiring. Arguably it could have been done more efficiently, but if it had been, it would have been worse. Various schemes were proposed by the cable TV industry over the last two decades, most of which were ways to do pay-per-view at lower cost to the cable company. With those schemes, the only content you could watch was sold by the cable company. We're lucky to have escaped that fate.

Re:Not a new idea, or a useful one (1)

lgw (121541) | more than 2 years ago | (#40909465)

The concept is obsolete because so much content is now "personalized". You can't cache a Facebook page or a Google search result. Every serve of the same URL produces different output. Video can be cached or multicast only if the source of the video doesn't object. Many video content sources would consider it a copyright violation. Especially if it breaks ad personalization.

All of those examples are aggregates of data that could be cached (and often are, in practice, just farther upstream than might be ideal in some cases).

Re:Not a new idea, or a useful one (1)

lannocc (568669) | more than 2 years ago | (#40909497)

If properly designed, something like a Facebook page actually is cacheable. Once an entry is made, the entry itself remains static unless there is an edit. The page is simply a feed of resources that may all be cached individually. Imagine it's an XML document with many XLinks to other resources, optionally also embedded in the original request. This is how I would do it.
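
One way to picture that design: the page becomes a tiny mutable manifest whose entries point at immutable, individually cacheable resources by content hash. A hypothetical sketch:

    import hashlib, json

    def blob_key(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Immutable entries: cacheable forever under their content hashes.
    entries = [b"status: had a great lunch", b"photo: <jpeg bytes>"]
    store = {blob_key(e): e for e in entries}

    # Only this small manifest changes when the feed is edited.
    manifest = {"user": "alice", "feed": [blob_key(e) for e in entries]}
    print(json.dumps(manifest, indent=2))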

we see this with filesystems (1)

YesIAmAScript (886271) | more than 2 years ago | (#40907793)

And it has the same issues. Fifteen years ago everyone said that we'd move past using files to store stuff and just go for the stuff we want. Microsoft had WinFS, for example (part of Longhorn).

But then the question comes where do you actually store the stuff?

The real change came not by eliminating using files to store stuff, but by changing how we retrieve stuff.

And this is the same way. Changing how you locate stuff on the internet is not going to remove the need for TCP/IP. You're still going to have to contact a machine to get the data and it'll have to send it back to you and the internet will have to route it between the two.

And not to put down Van Jacobson, but we're already well along the path. I remember when URLs first started appearing in ads; some day in the future, we'll look back and remember the days of URLs in ads as quaint.

Why go through the trouble of creating a URL, and even a short URL (http://bit.ly/itsmbmam), to the sampler for My Brother, My Brother and Me, when if you search for "mbmam sampler" the sampler is the first result? Some day we'll stop even bothering. At least it seems that way to me.

How typical: blame the network. (0)

Anonymous Coward | more than 2 years ago | (#40907955)

You have to care where it is stored. It isn't TCP/IP that is holding you back - it is physics.

Where you get/store your content, where you are, and how to get there: the model is no different from how someone has to find a path to get groceries, gasoline, or any other resource that requires some sort of addressing and a path to reach it. Whether it is addressing for storage protocols (Fibre Channel or other disk tech, SATA, etc.) or MAC addresses, IP is an addressing technology; changing it will not fix an oversubscription of data across a fixed infrastructure.

The issue of bandwidth is more of an issue of physical infrastructure technologies than it is an issue of protocols used in those technologies.

Until all of the data you ever need is always with you where ever you are - you still need to care about where you are, where what you want is and how to get there.

So ... bittorrent (0)

Anonymous Coward | more than 2 years ago | (#40908285)

It's just peer to peer networking. Which you can do on top of TCP/IP, and you want to do that because the "who" is frequently more important than the "what."

Anyway, the problem with this is that the "who" that the content starts out with originally is afraid to trust it to anyone but a limited set of trusted sources. The solution will end up being large media providers with servers close to most of their customers.

Paging Ted Nelson (1)

Megane (129182) | more than 2 years ago | (#40908455)

The fundamental idea behind Jacobson's alternative proposal — Content Centric Networking — is that to retrieve a piece of data, you should only have to care about what you want, not where it's stored.

So he wants to re-invent Xanadu? [wikipedia.org]

It will never get implemented... (0)

Anonymous Coward | more than 2 years ago | (#40908491)

Because it is not possible to censor something that exists everywhere and nowhere!

Simple, isn't it?

What? (1)

TonyAldo (2702885) | more than 2 years ago | (#40908581)

"PARC research fellow Van Jacobson argues that the Internet was never designed to carry exabytes of video, voice, and image data to consumers' homes and mobile devices, and that it will never be possible to increase bandwidth fast enough to keep up with demand" The internet was never designed to carry exabytes? Who is this guy kidding? It's not the "internets" fault or how it was designed. Blame the ISPs that provide the terrible bandwidth. Google fiber seems to be the answer and the image other ISPs need to follow. Greed is what powers todays slow bitrate not the "internet". The reason "it will never be possible to increase bandwidth" is because the ISPs refuse to.

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#40909267)

This is the guy who fixed TCP/IP flow control when the Internet started to undergo congestion collapse in the late 1980s. (See http://ee.lbl.gov/papers/congavoid.pdf [lbl.gov] ). I submit to you that he knows a HELL OF A LOT MORE about what the Internet can and cannot do than some guy who works as an account manager at a bank and puts up crap pr0n on his website (assuming you're the same as @TonyAldo and as the owner of tonyaldo.com).

Provenance is more important on the Internet (0)

Anonymous Coward | more than 2 years ago | (#40909075)

Provenance is more important on the Internet than most think - it is one of those Internet myths, like being anonymous. Well... that goes for first-class Internet citizens; if you're just a sharecropper (i.e., Apple chattel) then I guess provenance doesn't matter - at least it isn't your problem.

orbital content (1)

Eponymous Hero (2090636) | more than 2 years ago | (#40909179)

Sounds a lot like what A List Apart has been calling "orbital content" since at least April '11: http://www.alistapart.com/articles/orbital-content/ [alistapart.com]

Our transformed relationship with content is one in which individual users are the gravitational center and content floats in orbit around them. This “orbital content,” built up by the user, has the following two characteristics:

Liberated: The content was either created by you or has been distilled and associated with you so it is both pure and personal.
Open: You collected it so you control it. There are no middlemen apps in the way. When an application wants to offer you some cool service, it now requests access to the API of you instead of the various APIs of your entourage. This is what makes it so useful. It can be shared with countless apps and flow seamlessly between contexts.

The result is a user-controlled collection of content that is free (as in speech), distilled, open, personal, and—most importantly—useful. You do the work to assemble a collection of content from disparate sources, and apps do the work to make those collections useful. These orbital collections will push users to be more self-reliant and applications to be more innovative.

What an amazing concept... in fact... (1)

ilsaloving (1534307) | more than 2 years ago | (#40909213)

In fact it sounds identical to what CORBA promised. In fact, CORBA will take the world by storm! It will... um...

*headscratch* Hmm....

Freenet (0)

Anonymous Coward | more than 2 years ago | (#40909249)

You know, this sounds very much like the way Freenet works. It's not a "new" idea, at least not to the internet, but I'm sure we could benefit from it. You'd still need the bandwidth to move the data from the source to other places, where it would be stored and served by local hubs that people can download from faster, but I admit a single transfer between hubs, followed by local infrastructure, would be nice...
