
NSF Tags $30M For Game-Changing Internet Research

timothy posted more than 4 years ago | from the does-moving-the-goalposts-count? dept.

The Internet 119

coondoggie writes "So you want to build a better Internet? The National Science Foundation today said it would spread $30 million over 2-4 projects that radically transform the Internet 'through new security, reliability and collaborative applications.' The NSF said of its Future Internet Architectures (FIA) program: 'As a result of technological innovations and the requirements of emerging and yet to be discovered applications, the Internet of the future is likely to be different from that of today. Proposals should not focus on making the existing Internet better through incremental changes, but rather should focus on designing comprehensive architectures that can meet the challenges and opportunities of the 21st century.'"


Likely to be different? (4, Insightful)

Chris Burke (6130) | more than 4 years ago | (#30942698)

So, the internet of the future isn't going to be a general-purpose protocol-agnostic world-wide data network for sharing and communication of information?

Uh, can I opt-out of the future?

Re:Likely to be different? (2, Interesting)

FooAtWFU (699187) | more than 4 years ago | (#30942786)

I predict the next big thing for the Internet will need to wait until Google rolls out its version of a communications security infrastructure, issuing people certificates (why not? they know enough about you already) and helping them with public-key cryptography, ultimately leading to an email system free of spam.

Some decade.
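Public-key signatures are the mechanism the parent is alluding to: if every sender signed mail with a certified key, a receiver could reject anything unsigned or unverifiable. A toy textbook-RSA sketch of the sign/verify round trip (tiny primes, no padding, names invented here; a real system would use a vetted library and a scheme like padded RSA or Ed25519, never this):

```python
import hashlib
import math

# Toy textbook RSA with tiny primes: purely illustrative, never secure.
p, q = 61, 53
n = p * q                               # public modulus
e = 17                                  # public exponent
d = pow(e, -1, math.lcm(p - 1, q - 1))  # private exponent (Python 3.9+)

def digest(msg):
    # Hash the message, then reduce it into the RSA modulus.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg):
    # Only the holder of d can compute this.
    return pow(digest(msg), d, n)

def verify(msg, sig):
    # Anyone with the public (n, e) can check it.
    return pow(sig, e, n) == digest(msg)

sig = sign(b"no viagra here, honest")
assert verify(b"no viagra here, honest", sig)
```

A spam filter built on this would whitelist verified senders rather than guessing from content, which is why the key-theft and revocation objections in the replies below matter so much.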

Re:Likely to be different? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30943228)

That'd be a shitty system. Just wait until some spammers steal your private key, and send out billions of spam emails as if from you. You won't be able to yell, "Disregard! I suck dicks!" fast enough. And even if you do, people will still think you did it, since the system is so "secure".

Re:Likely to be different? (-1)

Anonymous Coward | more than 4 years ago | (#30943320)

Key revocation FTW.

More hot air. (2)

Cryacin (657549) | more than 4 years ago | (#30943394)

It is very unlikely that there will be a radical change in the Internet. Too many businesses, governments and people rely on current standards, which would be disruptive and expensive to change.

Don't believe me? Look at your power socket. Not many countries change their standards, because it requires quite a bit of expense to make it happen. That's why manufacturers and consumers wind up looking stupid when bringing an American appliance to a European power socket. *SCHMOKING!!!*

Re:More hot air. (1)

Anonymous Coward | more than 4 years ago | (#30944664)

I agree. I don't doubt that some useful tech will come out of this initiative, but expecting the Internet to change radically is highly unlikely to pan out. You have only to look at how we got to where we are: viruses, trojans, and malware are still a big worry, 15 years after Windows 95. Software and the most common operating systems still need weekly or monthly patching, and are still vulnerable to the same old exploits, buffer overruns, yadda yadda.

Change is hard and, so far, we haven't been very good at making it, short of a catastrophe.

Re:More hot air. (1)

tqk (413719) | more than 4 years ago | (#30944946)

You have only to look at how we got to where we are - viruses, trojans, and malware are still a big worry, 15 years after Windows 95.

Yeah, and anyone who didn't see it coming in '95 had it coming to them. I've run Linux/FLOSS since '93. "Linux on the desktop this year?!?" Why's everyone taking so long? FFS!

Software and the most common operating systems still need weekly or monthly patching, are still vulnerable to same old exploits, buffer overruns, yadda yadda.

All free, no registration required [*buntu, use sudo]:

su -c 'vi /etc/apt/sources.list' # .... debian.mirror.rafal.ca is blisteringly fast :-) Thanks rafal.

su -c 'aptitude update && aptitude safe-upgrade'

aptitude search blah

aptitude show blah

su -c 'aptitude install blah'

Beats the crap out of "app store"s. Have fun. :-)

Re:More hot air. (0)

Anonymous Coward | more than 4 years ago | (#30945024)

Change is slow. Radical change is possible but only if you look at it in terms of decades or even a full century.

So yes, you're right in that "radical" change is purely ridiculous. But if you're saying that change won't happen, that's most probably false. The change will just be so gradual, people won't even realize it.

Re:Likely to be different? (0)

Anonymous Coward | more than 4 years ago | (#30943670)

That doesn't change the fact that billions of spam emails have been sent within the trusted zone, and that some poor fucker has had his reputation utterly destroyed.

Re:Likely to be different? (1)

Sponge Bath (413667) | more than 4 years ago | (#30943148)

Uh, can I opt-out of the future?

Sooner or later we all opt out.

Re:Likely to be different? (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30943278)

I gave yo mama a pearl necklace, and I'm not talkin about what oysters sometimes produce.

Re:Likely to be different? (2, Funny)

Chris Burke (6130) | more than 4 years ago | (#30943338)

and I'm not talkin about what oysters sometimes produce.

You're not talking about sperm? Okay now I'm just confused.

Re:Likely to be different? (3, Insightful)

icebike (68054) | more than 4 years ago | (#30943470)

If only the future had opted into the past.

Quote from TFA:

From the Network World article: The NSF says it won't make the same mistake today as was made when the Internet was invented, with security bolted on to the Internet architecture after-the-fact instead of being designed in from the beginning.

"We are not going to fund any proposals that don't have security expertise on their teams because we think security is so important," says Darleen Fisher, program director.

And this really is the crux of the problem, isn't it?

Rampant SPAM (95% of all email), deep packet inspection, attacks, bot nets, the list goes on. Almost all the abuses we suffer daily on the internet are due to the security-as-an-afterthought model.

There will be those (there always are) who insist that this is nothing more than a government takeover and the installation of yet more back doors. There is nothing that can be done to appease that viewpoint; even open standards and open source will not suffice.

But I am not prepared to believe we cannot improve upon what was done 40 years ago, given the number of minds and the level of technology we have to apply to the problem today.

We defend the status quo because we know it, not because it is optimal, not because it is even close to being fully functional, and certainly not because it is fair.

Deal with political problems in the political arena. But in the meantime, let's fix our tools.

Re:Likely to be different? (3, Insightful)

Chris Burke (6130) | more than 4 years ago | (#30943746)

Rampant SPAM (95% of all email), deep packet inspection, attacks, bot nets, the list goes on. Almost all the abuses we suffer daily on the internet are due to the security-as-an-afterthought model.

Not really.

Bot nets exist because you can never stop people from installing software no matter how scary your warning dialogues about untrusted sources are (and in fact throwing up too many is counter-productive).

Spam and DOS attacks are because you can't prevent the bot nets.

Most of the real security problems are at the OS/application level. Not the underlying internet.

Re:Likely to be different? (1, Insightful)

ClosedSource (238333) | more than 4 years ago | (#30944014)

"Most of the real security problems are at the OS/application level. Not the underlying internet."

Sure. The Internet design avoids any security problems by officially assigning the problem to somebody else.

Re:Likely to be different? (2, Insightful)

Chris Burke (6130) | more than 4 years ago | (#30944128)

No, it's because there aren't many security problems to solve at the IP layer or below.

You can't stop botnets or spam by putting security into the internet itself. Not without breaking what the internet *is*.

Re:Likely to be different? (1)

ClosedSource (238333) | more than 4 years ago | (#30944162)

"No, it's because there aren't many security problems to solve at the IP layer or below."

Who says a new design has to use IP?

"Not without breaking what the internet *is*."

Remember, at the time it was designed, there was no "is".

Re:Likely to be different? (2, Insightful)

Chris Burke (6130) | more than 4 years ago | (#30944222)

Who says a new design has to use IP?

So... you're planning on introducing a bunch of security problems below the transport layer?

You'll still have to solve all the problems again at the application layer!

Remember, at the time it was designed, there was no "is".

Yeah instead there was a "designed to be", and it was designed to be what I described in my first post. You can break that if you want. I like it.

Re:Likely to be different? (3, Informative)

raddan (519638) | more than 4 years ago | (#30944564)

Actually, rethinking global addressing schemes is on the table in many of the next-gen Internet projects whose researchers I've spoken with. The reason is that router-table growth is not adequately handled in IPv6, nor is the meaning of an IP address very clear in the current Internet. These are major issues. Have a look at Jerome Saltzer's work on naming and addressing. If you want the short version, have a look here [ietf.org].

Re:Likely to be different? (1)

Chris Burke (6130) | more than 4 years ago | (#30945180)

I'm totally on board with rethinking addressing. My point wasn't that you couldn't use something other than IP... it was that the kind of security problems we're talking about solving aren't really problems at the link/internet level. They're mostly application level.

Oh, come on. (1)

Estanislao Martnez (203477) | more than 4 years ago | (#30945154)

No, it's because there aren't many security problems to solve at the IP layer or below.

Um, I don't know what you have in mind by "many," but the mutual authentication problems addressed by IPsec [wikipedia.org] are pretty damn important.

You can't stop botnets or spam by putting security into the internet itself. Not without breaking what the internet *is*.

Haven't given much thought to botnets, but a big part of the spam problem is simply the fact that our email protocols are built so that the whole message contents are always immediately pushed to every recipient. An improvement on that would be a model where the sender can only push a notification, and must hold the content in an outbox server for the recipients to pull on demand.
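A minimal sketch of that notification-push / content-pull model (class and method names here are invented for illustration): the body stays on the sender's outbox server until the recipient decides to ask for it.

```python
import uuid

class Outbox:
    """Sender-side store: the message body never leaves until it is pulled."""
    def __init__(self):
        self._messages = {}

    def post(self, body):
        msg_id = str(uuid.uuid4())
        self._messages[msg_id] = body
        # Only this tiny stub gets pushed to the recipient.
        return Notification(self, msg_id)

    def fetch(self, msg_id):
        # Returns None if the sender has since withdrawn the message.
        return self._messages.get(msg_id)

class Notification:
    def __init__(self, outbox, msg_id):
        self.outbox, self.msg_id = outbox, msg_id

    def pull(self):
        # The recipient chooses whether and when to download the content.
        return self.outbox.fetch(self.msg_id)

outbox = Outbox()
note = outbox.post("Meeting moved to 3pm.")
assert note.pull() == "Meeting moved to 3pm."
```

The design choice is that spam now costs the spammer hosting and uptime instead of costing the recipient bandwidth; the reply below points out what it costs the recipient in return.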

Re:Oh, come on. (1)

totally bogus dude (1040246) | more than 4 years ago | (#30946798)

I don't think that would be an improvement. Spammers would just use botnets or compromised hosts or ISPs/datacentres that don't care to send and host their spam emails, just like they currently use them to send mail. So nothing would change there. Spam filtering would be harder, since you can't analyse the content of the message to determine if it's spam or ham. And if you retrieve every message automatically so you can filter it, then you've not really achieved anything at all; the only possible gain from having spammers host their spam messages is that they might be taken offline by the time you come to read your email. If you don't automatically retrieve messages, then senders will always know exactly when you read their message, instead of the opt-in read receipts currently in use.

Also, what if the sender's server is down when I go to read an email? What if it's just on a slow or congested link? What if I'm on a congested link? At least with regular "push" email, once it arrives it's local and fast to access and I can read my emails without an active internet connection. So pretty much everyone is simply going to configure their server or client to download every message automatically as soon as the notification is received, so it's just added an extra back-connection for no particular reason. (Kind of like FTP...)

Re:Likely to be different? (1)

GaryOlson (737642) | more than 4 years ago | (#30944216)

And the US Interstate highway system "avoids" any security problems by officially assigning the problem to the states. Yet, on the US highways people feel mostly secure. On the other extreme, highways in Iraq tend to be not so secure -- much like the Internet.

Expecting a new Internet built secure by design attempts to transfer the security aspect from the social arena to the technical arena. Although some "door, ignition, and tire locks" can be designed in as basic security for the new Internet's components, security is always relative to what society defines as effectively secure.

Re:Likely to be different? (2, Interesting)

misnohmer (1636461) | more than 4 years ago | (#30944114)

You may like this [http://www.nebunet.com] - social networking of any IP-connected devices, not just people. The idea is to turn the internet into many independent secure networks as easy to use as your favorite social networking site. It's not something Google would like to see - a self-organizing internet based on context - but most people would. What do you think?

Re:Likely to be different? (1)

icebike (68054) | more than 4 years ago | (#30944146)

I'm not convinced social networking sites are the security model I would like to see for the rest of the internet.

Just sayin.....

Re:Likely to be different? (1)

misnohmer (1636461) | more than 4 years ago | (#30944570)

Good point. Social networking sites are very much lacking in security today. But what if the internet was fragmented into many such context-based networks with built-in access control and security? It increases hacking effort significantly, as hackers now have to hack each network individually. It also allows people to expose their devices on much smaller (than the internet) networks, reducing their exposure to the elements. Searching also becomes easier, since you can search only relevant context networks. No?

Re:Likely to be different? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945724)

Besides, for security the LAST thing you want to be identified by is your connections. If a new internet is to have any chance of being adopted, it must of necessity include the ability to use the internet while perserving anonymity and privacy.

Re:Likely to be different? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945732)

"Preserving", not "perserving" or "perversing". Damn typos.

Re:Likely to be different? (2, Insightful)

DragonWriter (970822) | more than 4 years ago | (#30944266)

But I am not prepared to believe we can not improve upon what was done 40 years ago given the number of minds and the level of technology we have to apply to the problem today.

We can, quite easily (on the technical front), but it doesn't take any stunning new transformative technology, just the kind of incrementalism that the effort here disdains. It's not like the problems of SPAM and other similar problems haven't already spawned technologies designed from the ground up as complete "super-replacements" (that is, replacements with broader general applicability than the replaced system) that are also designed to avoid the problems with the replaced systems. For email and the problem of SPAM, AMQP (a generalized messaging protocol which subsumes, but goes far beyond, the function of email) is designed from the ground up to avoid the possibility of recipients being spammed.

The problem with replacing existing technologies with more secure ones is more social than technical. Putting money into technical research that specifically requires that it go only into things that are radically different from what exists now -- and thus a bigger social problem to get people to transition to -- doesn't help at all.

Re:Likely to be different? (1)

icebike (68054) | more than 4 years ago | (#30944380)

Putting money into technical research that specifically requires that it go only into things that are radically different from what exists now -- and thus a bigger social problem to get people to transition to -- doesn't help at all.

So, funding the development of the internet, while ignoring the perfectly good post office, was a total bust then???

Re:Likely to be different? (1)

antirelic (1030688) | more than 4 years ago | (#30944316)

I thought IPv6 was supposed to offer the solution? Whatever happened to "Internet2"? I remember NSF dumping money, maybe a year or so ago, into research on something identical to the above.

Why does the NSF (a political entity) have to dole out money to solve a problem that doesn't really exist? What I mean is that there are many companies out there coming up with ideas (both good and bad) for dealing with bandwidth issues. The good ideas will make a fortune for whoever figures them out. If some Slashdot lurker figures out a better way to network and decides to develop and implement the solution, then they are going to get rich. I can recall a search company that started out small, with no government money, that today has gone a long way toward "solving" the obscurity issue pondered in the early '90s.

Re:Likely to be different? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945760)

You are comparing apples and oranges. Your vaunted search company makes vast use of the already-existing network. That's like comparing a directory service to the telephone.

that won't happen (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30943596)

I think. I can't see China accepting any part of a future internet they don't have significant control of. We could see the rise of a highly distributed internet: there would still be global networks, but under different control and not interlinked. What I would like to see is internet 2.0 being a slow transition over to the IPv6 address space. What I'd really like to see is communities setting up their own private networks, using whatever protocol they want. Decentralization would be healthy, I think.

Re:Likely to be different? (1)

lucif3r (1391761) | more than 4 years ago | (#30945050)

This seems so wrongheaded: "Proposals should not focus on making the existing Internet better through incremental changes, but rather should focus on designing comprehensive architectures that can meet the challenges and opportunities of the 21st century."

Right, because radical changes are so often effective and quickly adopted... go, go, government waste.

I HAVE AN IDEA... (0, Offtopic)

Monkeedude1212 (1560403) | more than 4 years ago | (#30942730)

Let's restructure everything to be "IPinfinite"...

We will never, ever, ever, EVER, run out of Address space.

Re:I HAVE AN IDEA... (1)

oldhack (1037484) | more than 4 years ago | (#30942932)

That's too easy. Variable-size address. Where's my 30mil? I'll implement a prototype for half the price.
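"Variable-size address" isn't as crazy as it sounds; it's roughly the LEB128-style varint that formats like Protocol Buffers use, where small numbers cost one byte and the space never runs out. A rough sketch (illustrative encoding only, not a routing-ready address format):

```python
def encode_addr(n):
    """Encode a non-negative address: low 7 bits per byte, high bit means 'more'."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0x00))
        if not n:
            return bytes(out)

def decode_addr(buf):
    """Reassemble the 7-bit groups, least significant group first."""
    n = 0
    for shift, byte in enumerate(buf):
        n |= (byte & 0x7F) << (7 * shift)
    return n

assert encode_addr(300) == b"\xac\x02"                 # two bytes, not a fixed 4 or 16
assert decode_addr(encode_addr(2 ** 128)) == 2 ** 128  # bigger than IPv6, still fine
```

The catch, and probably why real networks don't do this, is that routers can no longer parse headers at fixed offsets, which is exactly the kind of hardware-unfriendly trade-off "IPinfinite" glosses over.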

Re:I HAVE AN IDEA... (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#30943056)

You're right. Let's also make it backwards compatible! I don't want my Windows 3.1 to lose its connection!

Re:I HAVE AN IDEA... (0)

Anonymous Coward | more than 4 years ago | (#30943570)

You're right. Let's also make it backwards compatible! I don't want my Windows 3.1 to lose its connection!

Holy shit, holy fucking shit! A Slashdotter actually correctly wrote "lose" instead of "loose." I never thought I'd see the day. Could it be that someone writes in a way that is NOT intended to demonstrate how stupid/careless he is? Could it POSSIBLY be that someone actually proofreads? Holy fucking shit!

Seriously, most people are such fucking sheep. They have no idea. Overnight, everyone suddenly started making these basic grammatical errors, the sort that should have been corrected in elementary school. They're such fucking sheep that other people making those errors caused them to make them too. They have absolutely no self-direction of any sort; it's just monkey-see, monkey-do, so even grammatical errors are subject to trends now.

Re:I HAVE AN IDEA... (0)

Anonymous Coward | more than 4 years ago | (#30943618)

I poofread what I write everytime.

Ye gods... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30942794)

While I'm certain that the major innovations they are targeting will come in time, there are some fairly basic changes to how the internet works today that can have major benefits. These are mostly in the way that identity is managed on the web and 'net.

The technologies exist today to make the web twice as easy and half as painful to use, including the end of passwords as we know them. When will these real changes that will help foster the next generation of technologies come to fruition?

Step 1: (4, Insightful)

swanzilla (1458281) | more than 4 years ago | (#30942796)

Abolish Flash, immediately.

Step 2: Add a session layer (1)

jonaskoelker (922170) | more than 4 years ago | (#30946662)

Step 2: add a Session Layer.

Why? First, a motivating example.

At my university, when I move from the room where I give TA sessions to my own office, I disconnect from a wifi AP and reconnect to another. This causes programs to see themselves as disconnected from the internet.

That's fine for web browsing (just hit reload if you were browsing the web while your laptop was in your backpack) or downloading with wget (resume with -c). But it sucks if you were streaming audio with mplayer: now you have to restart the stream and seek to where you were, which you might not know exactly.

It'd be much better if mplayer knew to hang back for a while and then restart downloading where it left off. Similarly for ssh: it disconnects, so I have to reconnect.

What would a session layer do for me? It would let me save some local state I could give to the other end of the connection to say "This is where we were, let's pick things up from there", following a disconnect.

The idea would be for the applications to try reconnecting and resuming the session when they see they're on the net again, even if on a different IP address.

Would that be solved with IP mobility (as, say, in IPv6)? Somewhat, but not completely. A session layer would, I think, allow me to move my network connection between different machines: instead of disconnecting from IRC-on-my-desktop and reconnecting on my laptop, producing a part+join, I could just move the session over (assuming the application supported it)---but not move all traffic over to my laptop.

Some applications support half-baked sessions (range requests for HTTP lets wget continue with -c, for instance).

What I want is for almost all applications to support suspending and resuming the connection. I want communication to be not between hosts or interfaces, but between conceptual entities---e.g. "Jonas Kölker" and "Some Audio Streaming Service"; but I'll settle for "Jonas' wget" and "Service's httpd"; this communication should transcend changes in the lower layer(s): if I need to change IP address or reopen a socket, why should (not does, why should) I care? Why should the endpoint? Why can't we manage a bit of state that lets us pick up from where we left when we resume a connection?

See also http://en.wikipedia.org/wiki/Session_Layer [wikipedia.org] .
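The resume idea above can be sketched concretely: imagine the server returning an opaque token with every read, which a client on a new IP address (or even a new machine) hands back to continue. A toy in-memory illustration (names invented), generalizing what HTTP range requests already let wget -c do:

```python
class StreamServer:
    """Holds the payload; every read returns data plus resumable session state."""
    def __init__(self, payload):
        self.payload = payload

    def read(self, token, nbytes):
        offset = token.get("offset", 0)
        chunk = self.payload[offset:offset + nbytes]
        # The returned token IS the session: the client can stash it,
        # roam to another AP or another machine, and hand it back later.
        return chunk, {"offset": offset + len(chunk)}

server = StreamServer(b"just setting up my twttr")
chunk1, token = server.read({}, 5)       # first connection
# ...wifi roam: the TCP connection dies, but the token survives...
chunk2, token = server.read(token, 5)    # resumed over a brand-new connection
assert chunk1 + chunk2 == b"just setti"
```

A real session layer would also have to authenticate the token (otherwise anyone who sniffs it can hijack the session), which is the hard part this sketch skips.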

NFS said: (-1)

Anonymous Coward | more than 4 years ago | (#30942810)

Internet, I am disappoint.

Time to dissolve NSF? (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30942814)

There is much better use for 30M such as spending it on education, which is broken rather than Internet which isn't not so broken.

Re:Time to dissolve NSF? (4, Funny)

goose-incarnated (1145029) | more than 4 years ago | (#30942902)

There is much better use for 30M such as spending it on education, which is broken rather than Internet which isn't not so broken.

Yup ... you're seriously making a great case there, trust me on this ;-)

Re:Time to dissolve NSF? (0)

Anonymous Coward | more than 4 years ago | (#30943656)

Ignoring the fact that education is horribly broken at the moment, 30M is a ludicrously tiny amount compared to the truckload of spending in other places.

Hell, that's a handful of spare change to some of the USA's idiotic expenditures!

Re:Time to dissolve NSF? (4, Informative)

Truth is life (1184975) | more than 4 years ago | (#30943820)

There is much better use for 30M such as spending it on education, which is broken rather than Internet which isn't not so broken.

That's not the point of the NSF. Besides, as this link http://nsf.gov/pubs/2010/nsf10001/toc.jsp [nsf.gov] to their FY 2009 report shows, they already spend almost a billion dollars a year on education. Or over 30 times the value of this award. I really don't think you can claim that canceling this award and giving the money to the DoEdu (or even shifting it to the education side of NSF) would be better value for the money.

Re:Time to dissolve NSF? (2, Funny)

TimHunter (174406) | more than 4 years ago | (#30945150)

Don't waste it on education. $30M is much better spent fighting hunger. And working for world peace. Spend the $30M fighting hunger and working for world peace. And manned space exploration. Spend the $30M fighting hunger, working for world peace, and manned space exploration.

I'll come in again.

Re:Time to dissolve NSF? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945786)

Beyond a certain point -- which has already been exceeded in most of the U.S. -- there is a negative correlation between money spent and the quality of education.

Spend it on buying NASA a clue, or something else equally worthwhile.

I can solve this easy (2, Insightful)

antifoidulus (807088) | more than 4 years ago | (#30942822)

through new security, reliability and collaborative applications.

No need to create new tech to do that, I can increase the security, reliability, and the collaborative potential of the internet easily, just get rid of Windows. There, can I have my $30 mil now?

Re:I can solve this easy (1)

bakawolf (1362361) | more than 4 years ago | (#30942968)

no, they'd re-purpose it for a combination of usability and configurability.

Re:I can solve this easy (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#30943044)

I don't know if you are an arrogant Mac user or a Pompous Linux Guru, but you have to realize that the vulnerabilities in Windows do not make the FUNDAMENTAL vulnerabilities in other systems go away.

If Microsoft folded up shop tomorrow and the only machine you could get at a big store was a Mac, one of two things would happen. Either
A) More and more viruses would pop up for Macintoshes. And yes, there are some, so don't try and deny that. Or
B) Macs, being locked into a very specific hardware set, would have to adopt a more open policy (opening more holes), or it would cause some serious stagnation in the producers of other computer parts - completely ruining all competition and slowing all progress.

And if everyone were using Linux, it would be just the same as before. Everyone would be sudo'ing this and that, and hackers would exploit any setup the user uses to make their PC easier to use.

You need someone like Microsoft to be the scapegoat for the idiot masses so that more secure systems can even exist.

Re:I can solve this easy (2, Insightful)

antifoidulus (807088) | more than 4 years ago | (#30943340)

When the Chinese hackers decided to go after Google, which machines did they go after, the Linux servers or the Microsoft Windows clients? Answer: despite the fact that the data they were after lives on the servers, they went after the clients, because Microsoft "security" is a joke and serious, easy-to-exploit holes go unpatched for months on end in Redmond. Not to mention the sheer amount of shit they REQUIRE you to be an admin for, the total lack of transparency in their processes, etc. If Microsoft disappeared tomorrow, there would still be security exploits, but significantly fewer than there are now.

Not to mention the numbers speak for themselves: despite having over 90% of the PC market share, Microsoft has less than half that [pcworld.com] and that share is continuing to decline. Why is that? Because cracking Windows is pretty trivial, and if a company has important data they want to protect, they sure as hell aren't going to go Windows.

Microsoft has never paid anything but lip service to security, and I suspect they never will. Oh well, the sooner the world is rid of Windows Server, the better.

Re:I can solve this easy (2, Insightful)

causality (777677) | more than 4 years ago | (#30943370)

I don't know if you are an arrogant Mac user or a Pompous Linux Guru, but you have to realize that the vulnerabilities in Windows do not make the FUNDAMENTAL vulnerabilities in other systems go away.

If Microsoft folded up shop tomorrow and the only machine you could get at a big store was a Mac, one of two things would happen. Either A) More and more viruses would pop up for Macintoshes. And yes, there are some, so don't try and deny that. Or B) Macs, being locked into a very specific hardware set, would have to adopt a more open policy (opening more holes), or it would cause some serious stagnation in the producers of other computer parts - completely ruining all competition and slowing all progress.

And if everyone were using Linux, it would be just the same as before. Everyone would be sudo'ing this and that, and hackers would exploit any setup the user uses to make their PC easier to use.

You need someone like Microsoft to be the scapegoat for the idiot masses so that more secure systems can even exist.

Microsoft is just catering to a need. The "need" is that people want to use technologies and networks without understanding what they are using or at least learning about their correct use. So long as people think this is a great idea and refuse to invest a little time learning about the tools they use every day, the security situation is not going to improve. I'm actually fine with this; people who fall for phishing attempts and the like are merely getting out of the system what they were willing to put into it. It concerns me that this is not a technological problem but technical solutions are being proposed for it. Those can only have the effect of restricting the free and open network that is available today for anyone who wants to learn how to use it.

Re:I can solve this easy (0)

Anonymous Coward | more than 4 years ago | (#30945794)

you forget one thing...
Microsoft was extremely late to the whole networking party. Unix machines were sending packets over the arpanet before BillyG got ahold of an altair. Microsoft ignored TCP/IP up until 15 years ago. Despite all the things they have said, it has been shown time and time again that they do not do complete rewrites of their codebase. There are bugs in win7 that have been poking around since the first version of NT. 9x series security was a total joke and NT's is mediocre. face it, the core that is windows has so many holes in it that nothing short of going to Singularity/Midori will solve. Even then, certain questionable design practices will still bite them in the ass (like the browser and file explorer sharing the same rendering engine)

OS X and Linux don't have nearly as many problems, and not because of userbase or being open source or anything like that. They are superior because they are based on a mature design paradigm that put security and the network first. This explains why, in spite of being the number one operating system in the server, embedded, and supercomputing markets, Linux has had at most 10 viruses ever made for it, and none of those were zero-day. Compare that to Windows, which can't seem to go three weeks without a massive zero-day virus attack.

Microsoft can do better; they have shown this with their research division. They just have to pull the trigger on deprecating Win32. Once it's pure .NET, they will have incredible security. Maybe use the virtual machine software they have been sitting on to pull an Apple, which would also give them the advantage of running any software on any processor architecture. But they won't, and that's why Windows will always be a buggy piece of crap.

Re:I can solve this easy (1)

raddan (519638) | more than 4 years ago | (#30944590)

Security has to be addressed both at the OS level and at the network architecture level. We can't continue to rely on the good behavior of all of the actors on the Internet. Even if you make all operating systems secure and well-behaved, what's to stop someone from writing something new?

Getting rid of Windows eliminates an entire class of problems, of which network security is NOT one. When I'm bored at work and decide to portscan the spammers, guess which port I see open. Hint: SSH.

All-out pipe-dream grant (1)

oldhack (1037484) | more than 4 years ago | (#30942830)

"Technological innovations and the requirements of emerging and yet to be discovered applications, the Internet of the future is likely to be different from that of today. Proposals should not focus on making the existing Internet better through incremental changes, but rather should focus on designing comprehensive architectures that can meet the challenges and opportunities of the 21st century."

Essentially, it's a "stimulus" plan for the network research sector.

This is nice (2, Interesting)

dedazo (737510) | more than 4 years ago | (#30942836)

But honestly, with the US so far behind other industrialized nations in broadband quality and penetration, shouldn't this be promoted by Japan or South Korea? Who cares about the super-duper better intertubes if you're still stuck at the 1.2 Mb/s downstream dictated by the local suckage cable mini-monopoly?

I'm all for this type of thing, I really am. But fix the basement before you go adding a new chimney.

Re:This is nice (1, Funny)

Anonymous Coward | more than 4 years ago | (#30943026)

Here's how I read your comment:

"Wahhh, wahhh, do what I want with your money! Wahh!"

Re:This is nice (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945838)

If you think the cable monopolies are "mini", you haven't been paying attention.

I'll take cash or check... (2, Funny)

cosm (1072588) | more than 4 years ago | (#30942930)

Security:
Fourier Transform FT( Internet ) - Security through obscurity, it won't make any sense!

Reliability:
Mobius Transform MT( Internet) - You always end up where you start, SynAckishly

Collaboration:
Wavelet Transform WT ( Internet) - Make it a design ideology, Google's got it ;)

Re:I'll take cash or check... (1)

oldhack (1037484) | more than 4 years ago | (#30942990)

It's been a long time since I read anything on signal processing, and I know jack shit about wavelet stuff, but why wouldn't the FT make any sense? Transform back out to the time domain and it's good to go.

Re:I'll take cash or check... (1)

cosm (1072588) | more than 4 years ago | (#30943200)

My mental reference of FT'ing things that shouldn't be FT'ed: XKCD 26 [xkcd.com]

Best to keep doing patches (3, Insightful)

Darkness404 (1287218) | more than 4 years ago | (#30942974)

It's a lot better for the world as a whole if we keep doing small improvements to the internet rather than a total overhaul. For one, an overhaul would create a huge amount of waste in a short period of time; for another, it would not be entirely global: corporations, governments, etc. would aim to reduce global communication and global trade. If we do create a "new internet", it should be as decentralized as possible, nearly untraceable, and fully global (no geolocation/IP-address-based discrimination). However, governments do not like us exercising the freedoms we have on paper, and corporations want to maximize profits, so this will never happen.

Re:Best to keep doing patches (1)

steelfood (895457) | more than 4 years ago | (#30943110)

Be careful. Internet version 3 may come with DRM built right into the standards.

Re:Best to keep doing patches (1, Informative)

Anonymous Coward | more than 4 years ago | (#30944350)

Posting anonymously as I am working on one of the projects.

"It's a lot better for the world as a whole if we keep doing small improvements to the internet rather than a total overhaul"

Speaking for my project only, small improvements ARE the entire point: leverage today's infrastructure to achieve better $performance_metrics. Sure, we want applications and devices to have security/trust/nachos, but we leverage as much existing hardware and as many existing protocols as possible. The one thing we do not want is a "separate" internet. Those who want segmentation can simply look at the fragmented social networking apps to see why this is a bad idea. Don't like social networking? Fine, drive from Hawaii to CA by car (only).

Huh? (0)

Anonymous Coward | more than 4 years ago | (#30943022)

Not Safe For ...what?

Hey, I'll take a couple hundred grand for this (2, Interesting)

goose-incarnated (1145029) | more than 4 years ago | (#30943186)

I doubt that this is open to non-Americans, so I'll just post my idea here instead:

Make every endpoint (home 'puter) have no less than two different ISP connections. Then every home computer can also be a router. This does mean that every single packet has to be encrypted (a solved problem, methinks), and that every single endpoint is properly uniquely identified.

Advantages are numerous: encryption is required for it to work at all; consumers get redundancy (not only for their own net connection, but throughout the entire path as well); and ISPs don't have to provide an $X Mb/s connection; they can provide $X/2 Mb/s and let the computer load-balance while routing. The last advantage is that torrent-like downloads can take place without the need for special p2p software.

Disadvantages do, of course, include the fact that every consumer doubles their internet bill and that a govt is unlikely to fund a global TOR rollout :-)
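For what it's worth, the load-balancing half of the parent's proposal can be sketched in a few lines. This is purely an illustration of the idea, not any real stack: the link names and capacities below are invented, and a real endpoint-router would work at the packet-scheduling level inside the OS.

```python
# Sketch: an endpoint with two ISP uplinks spreads outgoing packets across
# both links so traffic converges to a capacity-proportional split.

class DualUplink:
    def __init__(self, capacities):
        # capacities: dict mapping link name -> bandwidth in Mb/s (hypothetical)
        self.capacities = capacities
        self.sent = {link: 0 for link in capacities}  # bytes sent per link

    def pick_link(self):
        # Choose the link with the lowest utilization (bytes sent / capacity),
        # which naturally balances load across unequal links too.
        return min(self.capacities,
                   key=lambda link: self.sent[link] / self.capacities[link])

    def send(self, packet_bytes):
        link = self.pick_link()
        self.sent[link] += packet_bytes
        return link

# Two $X/2 links instead of one $X link, as the comment suggests:
router = DualUplink({"isp_a": 10, "isp_b": 10})
for _ in range(100):
    router.send(1500)  # one MTU-sized packet per iteration
# With equal capacities the traffic splits exactly 50/50 across the links.
```

With unequal capacities (say 10 and 5), the same rule sends roughly two thirds of the bytes down the faster link, which is the behavior you'd want from a home router doing this transparently.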

Re:Hey, I'll take a couple hundred grand for this (1)

some_guy_88 (1306769) | more than 4 years ago | (#30943790)

I think it'd be cool if everyone connected their houses together using their existing standard networking equipment (wireless or otherwise). Every house would be a router. You'd only need normal ISPs for connecting one town to the next. Might be a bit slow though.

Re:Hey, I'll take a couple hundred grand for this (1)

GaryOlson (737642) | more than 4 years ago | (#30944448)

Are you perhaps suggesting replacing the current hub and spoke cabling architecture with a modified full mesh architecture where peers can route around the hub?

Re:Hey, I'll take a couple hundred grand for this (1)

Areyoukiddingme (1289470) | more than 4 years ago | (#30944676)

Yes. Yes he is. And so have I, in posts now two years old. Most of suburban America is within gigabit ethernet run length of at least 2 other houses, and many can reach 4 other houses. Those that are farther away than that can use repeaters. Five port gigabit ethernet switches are cheap (under $60), and firmware for those switches that can generate and maintain multiple simultaneous spanning trees is available from research labs.

I have a cable modem. I already share a local loop with some fraction of my neighbors. I'd much rather share a local mesh that we OWN with every one of my neighbors when that local mesh could be 500 times faster at upload speeds and 100 times faster at download speeds. We could band together and form a co-op to buy backbone connectivity for our mesh.

And... it will never happen. I let that dream die a long time ago. It requires my neighbors to first understand what the hell it is I'm talking about and second to actually cooperate. It makes me tired even to think about the effort required to accomplish that. Not to mention Charter and AT&T would fight tooth and nail to kill the idea, up to and including bribery and lies to local government.

It was a nice dream. Too bad it's dead.
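The spanning-tree machinery the parent mentions is the only genuinely fiddly part, and even it is simple in outline. Below is a toy sketch, with a five-house graph invented for illustration; real switches run this logic in firmware (e.g. 802.1D and its successors), not Python.

```python
# Sketch: houses are nodes, Ethernet runs between neighboring houses are
# edges, and each switch keeps a loop-free spanning tree of active links.
from collections import deque

def spanning_tree(adjacency, root):
    """Return the set of edges a BFS spanning tree would keep active."""
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                tree.add(frozenset((node, neighbor)))
                queue.append(neighbor)
    return tree

# Hypothetical neighborhood: most houses reach 2-3 others, as the comment says.
mesh = {
    "house1": ["house2", "house3"],
    "house2": ["house1", "house3", "house4"],
    "house3": ["house1", "house2", "house5"],
    "house4": ["house2", "house5"],
    "house5": ["house3", "house4"],
}
tree = spanning_tree(mesh, "house1")
# A spanning tree over N nodes always keeps N - 1 edges; here 4 of the
# 6 physical links stay active and the other 2 remain available as backups.
```

The "multiple simultaneous spanning trees" firmware the parent refers to is essentially this computation repeated from several roots, so different traffic can use different subsets of the redundant links.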

Re:Hey, I'll take a couple hundred grand for this (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945892)

Here's the problem: a 5-port gigabit switch has a maximum throughput of 1 Gb/s, not the 5 Gb/s (or more accurately 4 Gb/s) it would take for everybody to get their full bandwidth.

Re:Hey, I'll take a couple hundred grand for this (0)

Anonymous Coward | more than 4 years ago | (#30944768)

There would be decided benefits to this design change.

Hub and Spoke was implemented for ease of maintenance in the datacenter. (The hub has a central patch panel, and reconnections and route patches can occur very quickly for the enterprise.)

Outside of a datacenter, though, it becomes a liability for information. Say your ISP (A) has direct lines to ISPs B, C, and D, while your recipient, just across town, is on ISP E, which has direct connections to D, F, and G. In order for your packet to reach across town, it has to be routed through ISP D's network hub, which could be in a totally different town, connected via a dedicated fiber line. That means it contends both upstream and downstream with other traffic crammed into those trunks, and severe bottlenecking can occur.

The modified mesh topology would determine if there was sufficient quality of service over the "non-preferred" routing system (Modified mesh connections, instead of the main ISP trunks), then route messages accordingly. This would reduce the amount of traffic going through the ADE trunks, which would make operating costs of those ISPs much lower by reducing the congestion of those trunks.

It would also provide an ad-hoc infrastructure that could at least partially solve the "rural broadband" problem. The residents themselves could establish peer network nodes. This would be greatly eased if the FCC were less anal about the power levels at which individuals may broadcast (current power limits on public broadcast space, such as "whitespace" broadcasting, are down in the microwatt range, which is insufficient to span the 1/4- to 2-mile distances between rural subscribers, and thus not suitable to sustain a cell; the 5 GHz band for 802.11a would be quite capable of this dirty work if the broadcast limits were relaxed, since wireless A is able to penetrate buildings and trees and therefore does not need line of sight the way B/G does), and if ISPs would remove the EULA clauses about providing connectivity to other users. These nodes could act as cells, providing connectivity to other node cells, until one or more of them is able to get access to the internet, and from there to a preferred trunk.

Since potentially "Many" cells near suburbs could get highspeed data access, there would be multiple failsafe exit points of the mesh network.

Implementing the cells would be easy, as many rural subscribers already have wind generators, and simply placing the transponder on top of the windmill tower would create a sizeable cell potential for the target distance. The wired network infrastructure could be stored in the battery house for the windmill.

It could also be subsidised via allowing it to be used as a failsafe network for extreme out of network cellphone operation. (SMS only, due to bandwidth limitations, and the potential for abuse by cellular users. SMS would be implemented over IP, then forwarded to the nearest ISP, and from there to the cell carrier defined in the encapsulated SMS datagram. Since most SMS messages could be encoded into a single IP datagram, this would be easy to implement, and out of order issues would be minimal. Time delay in forwarding the SMS message through the ad-hoc mesh would be less show-stopping than trying to do this with voice data, which requires very low latency. As-is, such an approach would have to be constantly attempting to determine best routes through the mesh, and dynamically promoting and demoting cells as primary routers in order to maintain a reasonable QoS, and would need some kind of modified DHCP+Nat, or use of native IPv6 in order to address all the new router nodes in transit anyway. It should never be used for major production, but only as a failsafe dynamic network, and as an information metrics gathering tool. (One could determine the best places to run fiber line, based on the routing preferences of the adhoc network over time.))

The same could be said for the wired or bridged wifi based networks that would appear inside city limits, as the same metrics data could be used to determine the most efficient routes between heavy centers of activity, and would be of great value to urban planners.
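The "non-preferred route" selection this comment describes is, at bottom, cheapest-path routing with congestion folded into the link costs. A minimal sketch, using plain Dijkstra over an invented topology (node names and weights are hypothetical; a congested trunk gets a high cost, mesh hops get low costs):

```python
# Sketch: route via the local mesh when its total cost beats the trunk path.
import heapq

def cheapest_path_cost(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph; returns the total cost src -> dst."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return float("inf")

# Congested trunk hops via ISP D cost 10 each; local mesh hops cost 2 each.
network = {
    "isp_a": {"isp_d": 10, "mesh1": 2},
    "isp_d": {"isp_e": 10},
    "mesh1": {"isp_e": 2},
    "isp_e": {},
}
print(cheapest_path_cost(network, "isp_a", "isp_e"))  # 4 (mesh) vs 20 (trunk)
```

Re-running this whenever measured congestion changes the weights is exactly the "dynamically promoting and demoting cells as primary routers" behavior described above, and the chosen routes over time are the metrics data the comment wants for planning fiber runs.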

Re:Hey, I'll take a couple hundred grand for this (0)

Anonymous Coward | more than 4 years ago | (#30943922)

You are clueless. This doesn't make sense for the average consumer or the telecom industry. Stuff that titillates wannabe network engineers, nothing more.

Some questions (1)

jonaskoelker (922170) | more than 4 years ago | (#30946726)

ISP's don't have to provide $X Mb/s connection, they can provide $X/2 Mb/s [...] every consumer doubles their internet bill

Why? Isn't there just as much infrastructure to maintain, and just as many bytes to transfer? Wouldn't the cost of that stay constant? Or does 100% of your bill go to keeping customer records and (oh wait, you may be on to something) customer service? If the custserv load increases, I might believe you. Otherwise, what's the reason for doubling the bill?

encryption is required for it to work at all

Erm, why?

consumers have redundancy (not only for their own net connection, but throughout the entire path as well)

What does the multi-homed-ness of endpoints have to do with redundancy in the core / on the backbone?

Last advantage is that torrent-like downloads can take place without the need for special p2p software.

What do you consider "torrent-like"? Sure, you can make multiple parallel requests, but you can do that while single-homed today. Don't you need some code to merge the responses into a coherent file or byte sequence*? Don't you need some code to decide which peers to send to? Don't you want that code to make smart decisions, i.e. send to the ones that send most to you, in order to entice them to send more to you? (If everybody employs this strategy, the bandwidth allocation converges to a market equilibrium.)

(* Hey, I'm getting a whacky idea: that's exactly what TCP does, by receiving beyond the window. Maybe we could... hmm... nah...)
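The "send to the ones that send most to you" strategy mentioned above is essentially BitTorrent-style choking, and the core decision is tiny. A sketch, with peer names and byte counts made up for illustration:

```python
# Sketch: tit-for-tat unchoking, i.e. allocate upload slots to the peers
# that have sent us the most data, enticing them to keep reciprocating.

def pick_unchoked(received_from, slots):
    """Return the `slots` peers that sent us the most bytes."""
    ranked = sorted(received_from, key=received_from.get, reverse=True)
    return ranked[:slots]

# Bytes received from each peer this round (hypothetical numbers):
received = {"peer_a": 900, "peer_b": 100, "peer_c": 500, "peer_d": 0}
print(pick_unchoked(received, 2))  # ['peer_a', 'peer_c']
```

Real clients add a periodic "optimistic unchoke" of one random peer so that newcomers with nothing to offer yet can bootstrap into the exchange; without it the market-equilibrium argument never gets off the ground.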

a govt is unlikely to fund a global TOR rollout :-)

How did TOR enter the picture?

Don't take what I say as criticism: your idea may be wonderful and sense-making. I just don't quite seem to understand why it is (if it is). Please help me understand.

NSF means ... (1)

PPH (736903) | more than 4 years ago | (#30943202)

... Not Sufficient Funds. I'll consider that $30 mill a down payment. You'll have my solution upon delivery of the balance.

A series of tubes! (2, Funny)

gestalt_n_pepper (991155) | more than 4 years ago | (#30943204)

Oh wait, somebody already took that one.

throw away DNS (0)

Anonymous Coward | more than 4 years ago | (#30943334)

hey let's throw away DNS, and we can have the domainname battle all over again ;)

Adoption (4, Insightful)

cosm (1072588) | more than 4 years ago | (#30943406)

Wishful thinking. What makes them believe anybody will adopt? The general theme I gather from the Slashdot community is that the preexisting design aesthetic (if you can even call it that) for the internet is actually pretty solid; it's just the implementation that people & organizations botch. The IPv6 bandwagon isn't about to collapse from all its passengers now, is it?

The folks who engineered the internet generally had decent enough foresight from a technical standpoint. It is the big telcos, with all their 'peering', 'filtering', and 'throttling', and their combined unwillingness to invest in new infrastructure, that put the choke hold on our tubes (pun intended). Do you expect the major Tier 1s to drop billions of $$$ to adopt? 'Cuz I sure as hell don't.

It's nice to honor those who came before us but (1)

ClosedSource (238333) | more than 4 years ago | (#30944118)

it's pretty clear that those who engineered the Arpanet/Internet assumed that its users would be highly trustworthy.

It was a reasonable assumption at the time, much like the assumption that DOS/Windows wouldn't need heavy security because PCs weren't going to be connected to strangers' computers.

Re:It's nice to honor those who came before us but (1)

martin-boundary (547041) | more than 4 years ago | (#30944516)

Don't confuse trustworthiness with the end-to-end principle [wikipedia.org]. The original vision was for a highly reliable dumb network, with smart terminals at the ends. That leaves the responsibility for trust squarely where it belongs, namely at the users' feet.

Re:It's nice to honor those who came before us but (1)

ClosedSource (238333) | more than 4 years ago | (#30945442)

I suspect that security was the trade-off that the end-to-end principle was all too willing to make.

Close... oh soooooo close, but no cigar. (1)

ka9dgx (72702) | more than 4 years ago | (#30945616)

The responsibility for security should be at the ends, not the middle. The middle is where you insert censorship and the canonical "Eve" who taps everyone's email and other communications.

Blaming the victim (user) isn't any smarter. They just want to use a tool. If it requires perfect knowledge of the state of the entire universe to know if it's safe to open a given file, then you can't blame them for failing to be G-d.

Capability-based security can give the end user a system that eliminates the need for perfect guessing and/or luck. The system gives a program only the rights you specify: no more, ever. It's the model now seeing service in smartphones and the like, in which every app runs in a sandbox. The difference is that it's even tighter than that: the granularity goes to the point where you can grant access to a single file, and there is absolutely NO way for the program to see anything else. You don't ever have to trust code outside of the OS kernel.

This can be done, for less than $30,000,000. Now, can someone help me write the grant application? Does anyone want to do it?
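To make the capability idea concrete, here is a toy sketch. It is pure illustration (no real OS API; the class and file names are invented): sandboxed code receives an unforgeable handle to exactly one file and has no ambient authority to open anything else. A real implementation would enforce this in the kernel, not in application code.

```python
# Sketch: a capability is an object granting one specific right; the
# untrusted code never calls open() itself and can't name other files.

class FileCapability:
    """Grants read access to exactly one path, decided by the grantor."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_program(cap):
    # The sandboxed code can only exercise the rights it was handed.
    return len(cap.read())

# The user (or OS) decides which single file the program may touch:
with open("demo_capability.txt", "w") as f:
    f.write("hello")
print(untrusted_program(FileCapability("demo_capability.txt")))  # 5
```

In Python this is only a convention (the code could still call open() directly); the point of a capability OS is that the kernel makes the equivalent of that escape hatch impossible.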

Will any organization *not* botch implementation? (1)

jonaskoelker (922170) | more than 4 years ago | (#30946780)

it's just the implementation that people & organizations botch.

That reminds me of a general notion: in economy, in theory, some things are best left to government. Say, building infrastructure, running a police force, internalizing negative externalities through pollution regulation, etc..

But if no political system can be made to exist in which the government actually does well what it is, in theory, the right "person" to do, is it really a good idea to leave it to government? If the market does worse than the theoretical best solution, but the government in practice does worse still even though in theory it should do better, why leave it to the government?

(You can flip it around and say "Market Failure" if you want a pro-government story to explain this.)

Having a monopoly on assigning internet names and/or numbers might mean that in the current political and economic reality, any organization that handles the monopoly will botch it and screw the users.

If that is the case (I'm not sure that it is, but if), maybe a network architecture that doesn't have the monopoly will produce a better internet, even though in theory it should be worse?

This is not a definitive answer. It's a question. One I think people designing internetworking infrastructure should ask themselves.

why spend all that cash? (2)

mt1955 (698912) | more than 4 years ago | (#30943592)

Wouldn't it be cheaper just to call Al Gore?

Re:why spend all that cash? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945936)

Call him what?

Changing games (0)

Anonymous Coward | more than 4 years ago | (#30943600)

Look, I know some people are passionately and quite stubbornly devoted to their games, but $30M to convince someone to change from, say, Halo to Gears of War? That seems a bit excessive...

Anyone else read "NSF" as... (0)

Anonymous Coward | more than 4 years ago | (#30943694)

...Need For Speed? I thought this was going to be about a new groundbreaking online racer!

IPv6 + multicast (1)

olivierva (728829) | more than 4 years ago | (#30943784)

Getting IPv6 and multicasting to work would massively stimulate the creation of new tech and apps, but I assume these two aren't considered 'technical innovations' anymore, because most of us have known for at least 10 years that this needs to happen.

FIA: That acronym is already taken (0)

Anonymous Coward | more than 4 years ago | (#30944022)

The Fédération Internationale de l'Automobile (F1) k thx

Self Signed Certs (1)

ObsessiveMathsFreak (773371) | more than 4 years ago | (#30944246)

Tell you what: give me $15 million and I'll give the other $15 million to Mozilla to get them to stop ripping on self-signed certs. Then we can finally have (far more) secure web browsing than we already have, and all with existing technology.

Whose security are we talking about? (2, Interesting)

CopaceticOpus (965603) | more than 4 years ago | (#30944430)

Increased security, built into the fabric of the internet, sounds like a goal everyone can support. However, to build security into the network, you must necessarily build in stronger methods of identifying the users of the system. This will make anonymity much more difficult, and will greatly increase the government's ability to track the online activities of individuals.

There are some situations where that power would be used for good, but do we really want to allow the government more power and more ability to monitor the population? I am sure that they are drooling over the possibility. The recent abuses of the FBI should give everyone a fair idea of how responsibly this power would be used.

I'm not sure what a "game-changing" technology would look like, anyhow. The internet is fundamentally about shuffling bits of data between endpoints. That much is not going to change, and the rest is just implementation. What are we going to try, sending twos?

Re:Whose security are we talking about? (1)

Jane Q. Public (1010737) | more than 4 years ago | (#30945978)

I think that's part of the point. In order for a "new internet" to be adopted by the tech community today, regardless of how much "security" it offered, it would have to include the ability to use the Internet privately and anonymously. I really do not see it being accepted any other way.

Re:Whose security are we talking about? (1)

phantomfive (622387) | more than 4 years ago | (#30946890)

I'm not sure what a "game-changing" technology would look like, anyhow. The internet is fundamentally about shuffling bits of data between endpoints. That much is not going to change, and the rest is just implementation. What are we going to try, sending twos?

I was thinking something similar, but then I realized in 1990 someone could have said the same thing. Then the world wide web came along, and while it wasn't exactly a change in the underlying basics of routing, it completely changed the way the internet appears from the surface. So I wouldn't be surprised if another similar change came along that completely changed how the internet looks again, though I have no idea what that change would be.

Auto-filtering (1)

Blakey Rat (99501) | more than 4 years ago | (#30945002)

I've invented an extension to DNS that automatically prevents accidental access to any web page that includes the term "game changing." I think it deserves a couple mil at least.

Improving the tubes... (0)

Anonymous Coward | more than 4 years ago | (#30945142)

Has far more to do with scrapping all the technology the internet replaces than with improving its inherent functionality.

Cable boxes? Phones? Pants? The wave of the past.

They have great timing (1)

wurp (51446) | more than 4 years ago | (#30945590)

We've already started working on the next version of the internet:
* making server based applications (like email and web apps) serverless (and free to host)
* making storage more accessible from anywhere
* making network apps scalable by default
* providing single sign-on across the whole net
* providing infrastructure to authenticate all messages

Read more at http://persistnet.pbworks.com/ [pbworks.com]. Unfortunately, a significant amount of the work is still in our staging area being prepped to be made public.

New validated email scheme (1)

GodfatherofSoul (174979) | more than 4 years ago | (#30946690)

Please??? As ingenious as some encryption algorithms are, I can't believe we haven't solved this one yet.