
Anti-Technology Technologies?

Soulskill posted more than 5 years ago | from the tubes-versus-tubes dept.

Communications 146

shanen writes "A story from the NYTimes about metering internet traffic caught my eye. I thought the exchange of information over the Internet was supposed to be a good thing? Couldn't we use technology more constructively? For example, if there is too much network traffic for video and radio channels, why don't we offset it with increased use of P2P technologies like BitTorrent? Why don't we use wireless networks to reduce the traffic on the wired infrastructure? Such technologies often have highly desirable properties. For example, BitTorrent is excellent for rapidly increasing the availability of popular files while automatically balancing the network traffic, since the faster and closer connections will automatically wind up being favored. Instead, we have an increasing trend toward anti-technology technologies and twisted narrow economic solutions such as those discussed in the NYTimes article, and attempts to restrict the disruptive communications technologies. You may remember how FM radio was delayed for years. As a modern example, the security requirements of a major company include anti-P2P software and extremely tight locking down of wireless communications; yet there are still gaps for the bad guys, while the main victims are the legitimate users of these technologies. Can you think of other examples? Do you have constructive solutions?"


146 comments

It's good to be back (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#23799543)

I had noticed a severe dearth of pseudo-intellectual incoherent rants on Slashdot the past couple of days; good to know they aren't going soft in that department. Actually having to read about real news instead of drivel from morons was scary!

Control (4, Informative)

Xiph (723935) | more than 5 years ago | (#23799551)

It's a matter of balancing control against efficiency.

Understanding the workings of an entire swarm is not easy.

With a swarm it is harder to offer differentiated service to "elite" customers who pay to get that extra bandwidth.
Where you are in the swarm will matter just as much as which connection you're paying for.

Bittorrent is the problem :( (5, Interesting)

Anonymous Coward | more than 5 years ago | (#23799631)

BitTorrent is a major part of the problem because it attempts to utilise 100% of the available bandwidth (at the client end). If every user used BitTorrent, then the ISPs would have to supply 1:1 bandwidth (instead of overselling as they do at the moment), thus dramatically forcing the price up for every user.

Re:Bittorrent is the problem :( (2, Insightful)

Anonymous Coward | more than 5 years ago | (#23799803)

... thus dramatically forcing the price up for every user.
That's what they say...

BitTorrent is a major part of the problem because it attempts to utilise 100% of the available bandwidth
BitTorrent is a protocol; what you describe is the default configuration of popular BitTorrent clients, which can be configured otherwise.
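For what it's worth, that kind of throttling is easy to see in miniature. Below is a minimal token-bucket rate limiter of the sort clients use for their upload cap; the names and numbers are illustrative, not any real client's API:

```python
import time

class TokenBucket:
    """Cap average throughput at `rate` bytes/sec, with bursts up to `burst`."""
    def __init__(self, rate, burst):
        self.rate = rate              # refill rate, bytes per second
        self.capacity = burst         # maximum bucket size, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until `nbytes` tokens are available, then spend them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Cap uploads at 50 KB/s no matter how fast the link is:
bucket = TokenBucket(rate=50 * 1024, burst=64 * 1024)
# for chunk in outgoing_chunks():    # hypothetical packet source
#     bucket.consume(len(chunk))     # waits whenever we're over the cap
#     sock.sendall(chunk)            # `sock` would be the peer connection
```

So saturating the line is a default setting, not something inherent to the protocol.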

Re:Bittorrent is the problem :( (4, Insightful)

Dan541 (1032000) | more than 5 years ago | (#23799891)

Consumers using what they paid for!!!!!

Oh no we can't have that.

~Dan

Re:Bittorrent is the problem :( (1)

homer_s (799572) | more than 5 years ago | (#23800939)

Customers having to pay for what they use!!! OMG!!!

Re:Bittorrent is the problem :( (2, Insightful)

rocketPack (1255456) | more than 5 years ago | (#23801553)

ISPs promising what they can actually deliver!!! ZOMG!!11one!11oneoneone!!!!11!111one


In the corporate world, this shit doesn't fly. You get less for more money, but it's guaranteed. What if ISPs just sold us connections that they could actually deliver, instead of jacking up the numbers to look good?

This issue can be argued from many angles, and I think it's pointless to throw mud back and forth -- the article asked for CONSTRUCTIVE suggestions, and I see neither of you have provided one. Let's stop rehashing solutions we already know don't work, and get back to the point please.

Re:Bittorrent is the problem :( (5, Insightful)

Anpheus (908711) | more than 5 years ago | (#23800031)

How? There isn't enough content to run BitTorrent maxing out my connection 24/7; I'd have to buy a new hard drive every day to do that. Can you propose any way of actually utilizing my connection 24/7 with BitTorrent, maintaining a seed ratio, and not clogging my hard disks (because I'd need to buy a NAS in short order)?

I think the result would be significantly lower than 100%. For one thing, 100% of people will never use any one technology. For another, even those who do can't possibly saturate their connection 100% of the time unless they're on dialup. I have fifteen-megabit cable with a realized throughput of around 13000 kbps to the continental US, and can easily get 1.6-1.7 megabytes per second on downloads. Even at just 1 MB/sec, I'd have to buy another 80GB hard disk a day to fill this line. Heck, I'd run out of content I'd even want to download.
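The parent's arithmetic holds up; here is the back-of-the-envelope version (the 1 MB/s figure is taken from the post above):

```python
# Storage consumed by a line held at a constant 1 MB/s for a whole day.
throughput_mb_per_s = 1.0                 # conservative figure from the post
seconds_per_day = 24 * 60 * 60            # 86,400 seconds
gb_per_day = throughput_mb_per_s * seconds_per_day / 1024
print(f"{gb_per_day:.1f} GB/day")         # ~84.4 GB: roughly one 80GB disk daily
```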

Re:Bittorrent is the problem :( (1)

Wildclaw (15718) | more than 5 years ago | (#23800441)

It is like what you see in the other Slashdot ISP/internet discussions.

One person suddenly says, "If you actually use your X mbit/s download, how much do you think it would cost the ISP?" That is of course ridiculous, because who actually uses their download 24/7?

There are three things that limit most heavy users (compulsive hoarders excepted):

* Consumption Time - You only have X amount of free time.
* Quality Material - There is only so much good material.
* Upload - Heavy users often run servers or upload to p2p. Even those who don't run servers often use p2p and share back at a 1:1 ratio like any good netizen.

Download bandwidth simply never comes into play. As upload is far more limited than download, it acts as the upper limit for me, and I'd guess for most heavy p2p users. But even then I don't actually hit that limit most months, because there is only so much I have time to actually consume. Pure downloaders, I can only guess, have an even harder time using up their bandwidth, as they don't use the upload side.

On a similar subject, I think the whole advertising of X mbit/s download is just a big scam. Find me one person who actually makes real use of their 24mbit/s connection. I have 24/1 myself, but only because it was upgraded for free from 8/1 without me doing anything, and I can tell you that the difference for me is extremely minimal.

Re:Bittorrent is the problem :( (0)

Anonymous Coward | more than 5 years ago | (#23800973)

I have 24/1 myself, but only because it was upgraded for free from 8/1 without me doing anything
Um, did that make anyone else erect? In Canada, if you are extremely lucky, you can find 10/1 residential connections. Choosing smaller, less-evil ISPs means you're limited to 5/800 (our incumbent copper provider isn't required to give the small ISPs access to the full 10 mbit/sec capacity of our pairs). And even in the newest buildings, there seems to be a refusal to lay fibre (commercial buildings being the obvious exception). When I hear about Verizon voluntarily putting in FTTH in Buffalo, and then offering FiOS at what are, from my perspective, really reasonable rates, I get jealous.

Re:Bittorrent is the problem :( (1)

grimwell (141031) | more than 5 years ago | (#23801395)

One person suddenly says, "If you actually use your X mbit/s download, how much do you think it would cost the ISP?" That is of course ridiculous, because who actually uses their download 24/7?

Using a connection 24x7 is easy, and filling both the upload and download streams is plenty easy.

Offer seeds of the popular Linux distros and the latest Helix ISO.
Put up a Tor router.
Or run a game server or IRC relay.

Just three examples off the top of my head that could fill a pipe 24x7. :)

Gimme the bandwidth, I'll use it... gimme the cycles I'll use them.

Re:Bittorrent is the problem :( (1)

potat0man (724766) | more than 5 years ago | (#23800665)

I'd have to buy another 80GB hard disk a day to fill this line

You know hard drives are rewritable, right? After watching that HD version of %latest_crappy_movie%, you don't actually have to keep it on your hard drive for the rest of your life.

Re:Bittorrent is the problem :( (0)

Anonymous Coward | more than 5 years ago | (#23801253)

No, they could just use QoS to lower the priority of BitTorrent traffic. Some BitTorrent clients already do that. The lower priority can increase latency and even cause dropped packets, but neither matters much to P2P transfers.

Honoring QoS settings across providers is the rub.

Re:Bittorrent is the problem :( (1)

Hojima (1228978) | more than 5 years ago | (#23801499)

Or they could reduce the ungodly profits they receive. The cost of bandwidth is already a hell of a lot more than it should be, not to mention you don't even get what you pay for. And hosting servers can be a pain in the ass, not to mention more prone to security attacks. If a small segment of a p2p network fails, the whole system remains intact: it's robust and need-based, which is exactly what we need. And to top things off, p2p makes it more difficult to forge IP addresses, which deters bot activity.

Re:Control (0)

Anonymous Coward | more than 5 years ago | (#23799783)

Businesses won't use alternative sources to rapidly deploy data because there's no money in it for other businesses to "help" with the deployment.

The increase in anti-tech tech creates more dollars for the big businesses that provide your connection to the web.

At least that's how it feels down under. Plans are getting worse and worse, and much stricter in their "acceptable use policies".

So many questions (0)

Anonymous Coward | more than 5 years ago | (#23799559)

One answer: No.

The oldest solution... (1, Insightful)

Anonymous Coward | more than 5 years ago | (#23799565)

is still the best solution.

USENET

Re:The oldest solution... (0)

Anonymous Coward | more than 5 years ago | (#23799597)

I prefer Gopher for downloading Blu Ray rips.

Re:The oldest solution... (1)

Omestes (471991) | more than 5 years ago | (#23802115)

Erm... Read the story just above this one. Usenet fails to think of the children, therefore no Usenet for you.

Also, net neutrality is about the "average" user, and I really doubt the "average" user can differentiate between Google Groups and plain ol' Usenet. I'm guessing that maybe less than 1% of modern traffic is Usenet.

Oh and, to fulfill the cliche requirement for all comments, "the first rule of Usenet is?"

Monitoring contributes to the problem (1)

asylumx (881307) | more than 5 years ago | (#23799575)

Seems like monitoring would either cause an additional choke point or add more traffic. Neither option seems like it helps...

Re:Monitoring contributes to the problem (1)

DaveV1.0 (203135) | more than 5 years ago | (#23799719)

OK, your geek credentials are revoked too. Go read up on routers and gateways and promiscuous packet sniffing.

Good technology =/= good business (4, Insightful)

SilentChris (452960) | more than 5 years ago | (#23799583)

In reference to the bandwidth-limiting efforts in particular: just because there may be a way to offset technical problems with good technology (e.g. BitTorrent for video/audio) doesn't mean it makes business sense. For an ISP, it may be more economical to simply limit the bandwidth of users (which is easy) than to solve what is really a fairly difficult problem. If:

What we're making now - Cost to implement bandwidth controls - Loss of customers that get ticked off

is greater than

What we're making now - Cost to implement good technology that handles bandwidth more efficiently

most companies are going to choose the former. It makes more business sense.
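As a toy illustration of that decision rule (every number here is invented):

```python
def net_position(revenue, implementation_cost, customer_loss=0.0):
    """What the ISP keeps after paying for a fix and losing annoyed customers."""
    return revenue - implementation_cost - customer_loss

revenue = 10_000_000  # hypothetical annual revenue
caps = net_position(revenue, implementation_cost=200_000, customer_loss=500_000)
tech = net_position(revenue, implementation_cost=3_000_000)
print("caps win" if caps > tech else "better tech wins")  # caps win, here
```

As long as the cheap-and-crude option nets more, that is the one that ships.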

I'm reminded of a passage in "Becoming a Technical Leader" (great book, btw -- a commenter on Slashdot mentioned it). It's about making the transition from techie to management, and analyzing the differences in thought processes. The author tells a story where a company was designing a system, and the requirement was "Make sure it can recover from one error per day" (or something similar). The technical people involved with the project thought it would be better if they could get it to "recover from any error, ever, immediately," as they thought that was a more interesting technical problem. Turns out it cost the company something like $4 million, and in the end they had something that a) the customer didn't really need and b) they basically couldn't sell to anyone else. The moral of the story is that just because there are interesting technical problems doesn't mean that solving them makes good business sense.

Bad Math (1)

shadow_slicer (607649) | more than 5 years ago | (#23800339)

I agree that that equation seems to be the way many companies are deciding their technological investments. If you think about it, however, you might notice the problem with it:

It does not take into account the effect improved technology will have on future markets. Successful businesses focus just as much on the future as on the present. Sure, the present is important: botch that and whatever your future plans are, they're worthless. On the other hand, it is idiotic to ignore the future. Successful companies look for opening markets or weaknesses in their competition and build up to take advantage of them.

I'm no MBA, but what you really need is a cost-benefit analysis and a risk analysis. You need to consider the costs on both sides (the cost of implementing bandwidth controls, the cost of implementing better technology, the cost/benefit of losing/gaining market share, the rapid depreciation of added infrastructure), as well as the risks, to generate a spectrum of possible futures weighted by their probability, while considering smaller projects that could be implemented to hedge your bets and reduce risk. Then you pick the path that leads to the set of outcomes with the most benefit, at a risk level that matches your company charter.

Re:Bad Math (1)

fast turtle (1118037) | more than 5 years ago | (#23801217)

The biggest problem, though, is that the C/E/I/Os of American companies rarely give a damn about anything more than next quarter's profits. Because of this growing attitude, long-term planning in the States is simply not happening, which is why our R&D efforts are fading into the past. Most of corporate America's problems can be traced directly to the profit-now-above-all-else thinking that pervades our culture, which goes right back to the I-want-it-Now consumerist thinking that's been pushed by our great and fearless marketing leaders.

The oldest solution... (0, Redundant)

Zil_Daggo (836417) | more than 5 years ago | (#23799585)

is still the best. USENET.

Re:The oldest solution... (3, Informative)

magamiako1 (1026318) | more than 5 years ago | (#23799639)

Zil:

I take it you're new to the internet. USENET is still a point-to-point protocol from A to B, and this is where the problem comes in: you have a significant amount of traffic going over that single point.

With torrent and peer-to-peer distribution, you have smaller amounts of traffic coming from many different points.

Load balancing, clustering, P2P--these are all technologies favored by the IT industry. If your distribution node goes down, nobody cares, because you have others. There's no single point of failure in a peer-to-peer distribution system.

P2P should be used in pretty much every scenario requiring high-bandwidth distribution of highly popular media (which is actually fairly common on the internet). This will drastically reduce bandwidth costs for the people paying and improve the end-user experience.

If we had Torrent before FilePlanet went pay, they probably would have never gone in that direction.

Re:The oldest solution... (1)

aj50 (789101) | more than 5 years ago | (#23799763)

You're missing the point. This article is about ISPs introducing bandwidth limits because some users use "too much" of their unlimited bandwidth. This is mostly a problem for ISPs because they have to pay for all the traffic they direct out of their own network.

Running a news server allows the ISP to download all the messages once from remote news servers and then distribute them only to customers within its own network.

Re:The oldest solution... (1)

magamiako1 (1026318) | more than 5 years ago | (#23800043)

And how many people use their ISP's news server to get the content they want? There's a reason various news services are extremely popular: retention time and access to content.

Re:The oldest solution... (1)

aj50 (789101) | more than 5 years ago | (#23800141)

But if ISPs really wanted to solve this problem, they could set up their servers with better retention time and access, and advertise the service better.

Blueyonder had a very good newsgroup setup, which fortunately still works after Virgin Media inherited it.

Re:The oldest solution... (1)

magamiako1 (1026318) | more than 5 years ago | (#23800743)

aj50:

You fail to address the fact that newsgroup access is "A to B" distribution. In the event that A dies, B has nowhere to get the file from. Sure, you could cluster A1, A2, A3, and if A1 dies you still have A2 and A3 to deliver to B. But in the event the route between A and B dies, A1-A3 now cannot deliver to B.

Then you still have the situation of A to B being limited on bandwidth. A to B might be fast, but A serving both B & C immediately cuts the bandwidth in half. You have to hope that B and C use their bandwidth differently or at different times: perhaps B uses A's bandwidth during the daytime hours and C uses it during the nighttime hours.

P2P distribution provides a network that is vastly scalable and redundant. It solves all of these problems dynamically. Mesh networking provides the greatest redundancy for distribution, far better than a simple logical star setup (which is what you propose).

If bits and pieces of a file reside on A, B, and C, and D comes in to get that file, he can get it from A, B, or C. A could drop off the map, and B and C could still distribute the same file to D. It's far less likely that A, B, and C will all disappear at the same time.

And not only is it fantastic for redundancy; there is *zero* interruption. If your news server dropped, you would have to make a new connection to a new news server that is online and restart the transfer. In a p2p mesh distribution model, the only thing you would see if A went offline is a drop in performance, since A is no longer there. And if B and C weren't at full capacity, you could increase the load on them to make up the difference for D.

There's really no way you can argue AGAINST this method of distribution. Really. The reason ISPs are complaining is that they oversold their infrastructure.

Re:The oldest solution... (1)

aj50 (789101) | more than 5 years ago | (#23800983)

I agree with all your points about an ISP news server being a single point of failure. A P2P system copes much better with the loss of a random node (bittorrent before trackerless torrents excepted).

However, a dedicated news server can take advantage of the fact that some parts of the network are faster than others. My line is limited to 8Mb/s. Even assuming I could upload at that speed, I can only send data at 8Mb/s.

Since the ISP owns and has set up the network, they can set up the news server so that it has more bandwidth.

Assuming A is the news server, has a 100Mbit connection to the rest of the ISP's network, and has the data, and B and C both want it, A can serve data to both B and C at the same time (as well as to ten other people).

I don't know what sort of topology an ISP's network typically has. At the outermost edges it is a star (with all the customers' lines meeting at the exchange). In an ideal situation, placing a news server at the center of the star halves the bandwidth required for distribution compared to a P2P solution. Additionally, it gives the ISP more flexibility: they can more easily find the best way of routing Usenet data and place news servers at the most efficient points in their network.

There is no reason the BitTorrent-esque way of splitting a file into "blocks" couldn't be implemented in a client-server model as well. Already, many download managers will resume an HTTP download, and there's no technical reason the same thing shouldn't be possible on Usenet.
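For what it's worth, block-wise resumable transfer over plain HTTP already exists via the standard Range header; a minimal sketch (the URL is a placeholder, and a robust client would check for a 206 Partial Content response before appending):

```python
import os
import urllib.request

def resume_download(url, path, chunk_size=64 * 1024):
    """Resume a partial download using an HTTP Range request."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    req = urllib.request.Request(url)
    if offset:
        req.add_header("Range", f"bytes={offset}-")  # ask only for the rest
    # Note: if the server ignores Range it replies 200 with the whole file,
    # so real code should verify resp.status == 206 before appending.
    with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
        while chunk := resp.read(chunk_size):
            out.write(chunk)

# resume_download("http://example.com/big.iso", "big.iso")  # placeholder URL
```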

Re:The oldest solution... (2, Interesting)

magamiako1 (1026318) | more than 5 years ago | (#23801215)

But your method has one glaring flaw: cost and complexity. And planning.

One of the really odd things about p2p technology is that it also distributes the cost of distribution. You could say the ISP should add more of this, add more of that--but in many cases keeping up that infrastructure becomes cost-prohibitive, especially if the cost is borne by one entity.

By using peer-to-peer technology, that cost is distributed across all of the users similarly--not equally, but similarly. And in torrent's case, it's also pretty good at making sure users who can use higher bandwidth also incur a higher cost: if you can distribute more, the protocol allocates more for you to distribute.

It's kind of automatic in nature: self-healing, distributive, with no single points of failure and no glaring flaws.

The only reason this is even "argued" is that ISPs didn't build the infrastructure to handle it, and that a lot of people are selfish and feel the cost shouldn't be borne by them.

But I'd rather pay, say, 10 cents here and 20 cents there than have the ISP assume I'm always going to use 20 cents and charge me accordingly for access to their distribution servers.

Re:The oldest solution... (4, Informative)

Tony Hoyle (11698) | more than 5 years ago | (#23799923)

BitTorrent is *not* more bandwidth efficient; it is merely more efficient for the distributor. It uses at minimum the same amount of bandwidth - normally more, in fact, due to its forcing of uploads (many torrents throttle based on upload and few will let you block uploads completely) - but it's spread across the users. It's also far slower than other methods, so it's only better if your time is worth nothing.

BT is a major problem the ISPs need to deal with: if you download something over Usenet or FTP, once it's done, it's done. With BT, unless you actively kill the connection, it'll continue sucking bandwidth. That contributes to something like 60% of average ISP traffic being P2P, and is why it's increasingly being blocked.

Re:The oldest solution... (1)

ardle (523599) | more than 5 years ago | (#23800019)

I think you're saying that the problem with BitTorrent is that people don't know how to use it.
This puts the onus on the authors of BitTorrent clients to build in reasonable defaults. Maybe something could be done with the protocol, too?
It seems to me that a lot of people haven't got their heads around the concept of sharing yet; they aren't thinking about what happens to stuff after they download it. It's a bit like litter ;-)

Re:The oldest solution... (2, Interesting)

magamiako1 (1026318) | more than 5 years ago | (#23800069)

Tony:

The very fact that you stated "BitTorrent is slower than other methods" of file distribution shows that you have very little grasp of file distribution and the limits of bandwidth.

I've had to deal with it directly, as a content producer.

There is not very much more efficient distribution out there than a peer to peer model.

Re:The oldest solution... (2, Insightful)

maxume (22995) | more than 5 years ago | (#23799767)

What are you talking about, there is no Usenet.

Also, try to remember the first rule.

So is this... (1)

master5o1 (1068594) | more than 5 years ago | (#23799603)

Is this what AT&T stands for: Anti-Technology Technologies... Interesting.

I only read the first few lines of TFA (which I suppose is more than some people would). But it seems this Internet metering stuff is the same as what we've always had in NZ -- 5GB monthly bandwidth, +$10 for an extra 5GB, etc. Until about 1-2 years ago we had 1GB limits and a shittier overage 'consequence': go over and pay 1c/MB, or get speed-capped back to ~56kbps (and we were already on 256kbps-2Mbps DSL). Then we moved to 5GB/$10 and now have max speeds of ~7Mbps (ADSL1 max?). Of course, some plans still have the speed-throttling overage crap...

Popstar technologies != great ideas (5, Insightful)

Anonymous Coward | more than 5 years ago | (#23799609)

use wireless networks to reduce the traffic on the wired infrastructure
Wireless networks are useful when there is no wired infrastructure, but if you have a wired network, it is orders of magnitude faster than the wireless option, especially where congestion is a problem. Using wireless to offload traffic from the wired network is like walking to avoid traffic jams.

BitTorrent is excellent for rapidly increasing the availability of popular files while automatically balancing the network traffic
BitTorrent (and P2P in general) is a kludge. Multicasting is a solution. BitTorrent is an inefficient protocol (from a whole network load point of view.) It bounces the same data around the net in unicasts. The swarm control overhead is bigger than it has to be because with slow upstreams you need more peers for acceptable download speeds.

It is a case of technology being held back by non-technical reasons, but please look beyond popular technologies when you make an assessment about desirable technologies.

Re:Popstar technologies != great ideas (2, Interesting)

Wildclaw (15718) | more than 5 years ago | (#23800831)

BitTorrent (and P2P in general) is a kludge. Multicasting is a solution. BitTorrent is an inefficient protocol (from a whole network load point of view.) It bounces the same data around the net in unicasts.
Only when it comes to incredibly popular files. Most torrents have maybe a few hundred peers, or up to a few thousand, spread over a huge part of the earth. Multicasting does little good in such a situation.

Multicasting is basically about taking you back to the old paradigm where everyone watches the same thing at the same time. (OK, you can save things so people can watch them later, but it is still the old paradigm.)

Bittorrent could probably benefit some from pairing peers that are locally close to each other, but that is a tracker problem, not a fundamental flaw with the protocol itself.

The swarm control overhead is bigger than it has to be because with slow upstreams you need more peers for acceptable download speeds.
Swarm control overhead is minimal unless you are running a rogue client. We are talking about 1% overhead here.

Btw, getting more peers has little to do with getting acceptable download speeds. It only spreads what precious little upload you have even thinner, so peers will be even less likely to trade chunks with you. Actually, if the extra peers you get are seeds, that isn't true; in that case you do get a little extra speed, although not much.

The real solution, if you are planning to p2p, is to stop looking at those stupid X mbit/s download numbers when choosing an ISP. Those numbers are only there for people who don't know that upload is what matters in 95+% of all cases. (And in the remaining cases, price is.)

Re:Popstar technologies != great ideas (0)

Anonymous Coward | more than 5 years ago | (#23801345)

Multicasting helps a lot when many people want the same data, which is exactly the problem BitTorrent tries to solve. When the same data is sent via multiple unicast streams, the bottleneck is mostly the upstream of the source and, to some extent, the backbone. Improving the locality of the transmissions only works when the fastest seeds are in the network neighborhood, which is unlikely given the slow upstreams of consumer connections. Lack of locality is a much smaller problem with multicast protocols, because the packets are duplicated after the long-haul connections. Instead of sending a packet from A to the backbone, to B, back to the backbone, and then to C (like BitTorrent), multicasting sends the packet from A to the backbone, where it is duplicated by a router and sent on to B and C. This removes a hop which is both unnecessary and particularly slow (B's upstream).
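A toy count makes the difference concrete. The topology below is deliberately crude (one source, N receivers, each behind its own access link, one shared backbone), but it shows where the copies go:

```python
# Copies of one chunk crossing each class of link, for N receivers.
N = 1000

# Swarm-style unicast relaying: every delivered copy was uploaded over
# some peer's slow upstream, crossed the backbone, and came down once.
unicast = {"slow upstreams": N, "backbone": N, "downstreams": N}

# Multicast: the source sends one copy; routers duplicate it after the
# long haul, so only the per-receiver downstreams carry N copies.
multicast = {"upstreams": 1, "backbone": 1, "downstreams": N}

print("unicast:  ", unicast)
print("multicast:", multicast)
```

The N slow upstream crossings are exactly the hop described above as "unnecessary and particularly slow".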

Obviously the swarm can only ever have a total download speed which does not exceed the total upload speed of all peers at the same time. The way to increase the download speed for the people who are currently downloading is to have more peers who are only uploading. That makes the swarm bigger than it would have to be with symmetric connections and increases the overhead. Even though the overhead is not gigantic, it is non-negligible, especially on slow upstream connections.

Re:Popstar technologies != great ideas (0)

Anonymous Coward | more than 5 years ago | (#23801957)

Using wireless to offload traffic from the wired network is like walking to avoid traffic jams.
And how is walking not a good alternative? Perhaps this attitude is why the U.S. is the single most obese country in the world, why gas prices are so high, and why the climate is changing? You had better believe that walking can avoid traffic jams!

Back to the original question: the problem is not with the existing networks, the problem is with companies that are not upholding their part of the bargain. The ISPs have been given a "franchise" by local and regional governments all over the U.S., and they are not providing the service they should.

The ISPs' complaints about the Internet becoming too crowded are simply moot given their failure to upgrade infrastructure. And the problem is not the ISPs themselves, no matter how much you people bitch. Yes, they are evil corporate monsters out to make a profit; get used to it, it's not new. The problem is a lack of governmental regulation.

Since the late 1960s, thanks to the right-wing movement, the U.S. has been slacking off on corporate regulation. I will not bother with the problems this deregulation has caused throughout the U.S. economy. The big ISPs need to be regulated. Corporations need to be told where the limits are!!! Limits on corporations are a good thing!! The only people powerful enough to place these limits on the ISPs are the same people who gave them the "franchise" or monopoly in the first place: THE PEOPLE.

People deserve the government they get. Stand up and stop this. Stop bitching on /. and start bitching to your congressman!

Long-term planning? (1)

Bieeanda (961632) | more than 5 years ago | (#23799621)

Simply stated, it's not in management's best interest to think beyond the next fiscal year or two. Massive rollouts of new technology or widening network backbones are simply not cost-effective in the short run.

The other side of things is that bandwidth usage isn't a constant-- much like TV, there's a definite 'prime time' when the networks are under heavy load, and laying new cable or provisioning new wireless devices just to cover those periods is not cost-effective.

There's also the real cost of bandwidth versus the gluttons who insist upon maxing their connections 24/7. Congratulations, guys. You're the reason why they're finally dropping the 'unlimited' charade. You want it unmetered, fresh from the backbone? Try leasing a T1, then get back to us on how cheap we're still getting it.

Re:Long-term planning? (1)

DaveV1.0 (203135) | more than 5 years ago | (#23799743)

Actually, that is not quite right.

Widening backbones and massive rollouts are not cost effective in the short run or the long run. Just ask Verizon about FiOS, assuming they will actually tell the truth.

Smaller rollouts and increasing the backbone in small increments are cost-effective, and that is what is happening. The problem is that usage is increasing faster than the network can be grown in a cost-effective manner.

Re:Long-term planning? (5, Interesting)

Entropius (188861) | more than 5 years ago | (#23799973)

I read an interview with a high-ranking Toyota exec in the New Yorker about how, in contrast to American companies, Japanese companies *do* think ten or twenty years in advance. He made the point that they didn't introduce hybrid cars in order to sell to hippies ca. 2000; they introduced them because, come 2010 or 2015, gas is going to be expensive enough that lots of people are going to want them... and they wanted a mature product -- both from an engineering and from a brand standpoint -- ready to go.

Now, in 2008, Priuses (and Corollas and Yarises) are common on the road in my city, while many of the short-sighted US manufacturers are trying to retool from building 18 mpg SUV's.

The interview mentioned a Japanese business term that has no translation in English; I forget the word, but it meant something like "the faith that building products that people need and selling them for a fair price, long-term, will be profitable, long-term." That might be less true now than it once was, but it's interesting to note that Japanese companies do tend more toward the "Build useful stuff; sell it for cost + profit" model, and American ones toward "Make whatever we can market and sell it for whatever we can convince people to pay".

The main exception to this that comes to mind immediately is Sony, who can go die in a fire. They've got their hands in lots of markets and are thus successful in that regard, but they don't seem to be the market leader in any of them. I follow the camera market fairly closely, and Sony's main market in the US seems to be:

1) people buying point-and-shoot cameras who didn't do their research and wind up paying >$100 more than an equivalent Canon or Panasonic that performs better;

2) digital SLRs, which aren't really Sony's; they're rebranded Konica-Minolta designs, from a company Sony bought out.

As an example of Sony's failings, their top-end bridge camera still doesn't offer any sort of processing controls: you're stuck with a JPG with one compression setting, one saturation setting, one contrast setting, one (excessive) noise-reduction setting, etc. There's no RAW mode. The lens is *very* prone to chromatic aberration.

Canon and Panasonic's competitors are cheaper, use superior optics, and offer control over the processing; Panasonic's versions have RAW, and Canon's

But, as a marketing matter, you can't sell stuff like this to Joe Sixpack by saying "Look! Good optics! Controllable processing! RAW mode!", so Sony didn't even bother trying to do this stuff.

Re: Japanese Proverb (3, Informative)

TaoPhoenix (980487) | more than 5 years ago | (#23800601)


"The interview mentioned a Japanese business term that has no translation in English; I forget the word, but it meant something like "the faith that building products that people need and selling them for a fair price, long-term, will be profitable, long-term."

The translation is "Fast Bucks vs. Slow Dimes". America likes this quarter's sales; Japan likes next decade's sales.

Short version (2, Insightful)

Kohath (38547) | more than 5 years ago | (#23799647)

Short version:

"I want everyone in the world to behave in a precise (but poorly defined) way to suit my personal sensibilities. Why don't they? Any ideas on how to make it happen?"

Have you tried saying "please"? Other than that, I have no ideas. Maybe try to help people and solve problems instead of worrying about whether things are done exactly your way.

Constructive solution (0, Funny)

Anonymous Coward | more than 5 years ago | (#23799651)

Kill all the lawyers.

Sincerely,

Bill Shakespeare

upsetting the apple card (5, Interesting)

terryducks (703932) | more than 5 years ago | (#23799653)

Follow the money. The ones with (power|control|money) want to stay on top, and it's only the ones with better agility that corner the market and then become top dog. So you're looking for a technical solution to the wrong problem.

What's the problem?
IMHO, it's the "last mile": a legislated limited monopoly controls access and has an interest in keeping that position, so a high barrier to entry is put in place.

Another problem is that what works in a high-density area will probably not work in a low-density area: a wireless mesh may work in cities and towns but completely fails in rural areas. Another issue is making data retrieval a crime; "you're" responsible for someone else's actions, and that kills any open public access. And someone has to pay to connect to the backbone.

If I had a solution that would work in all cases - I'd be rich :p

Here's a lynchpin that needs to be removed: the last-mile monopoly and its bundling with "providers". Here in the Northeast (US), the power line is a separate charge on your power bill from the generation charge. Break that up. Internet access "line" charge: $0.02 per month. ISP charge: $x. Anyone should be able to send data over the lines without the big guys restricting access, for the same cost. No ISP -- AT&T's included -- should be able to send data cheaper than another ISP.

It may be time for $TOWNs to own the lines, bid repair out to another party, and let anyone sign up with any ISP.

BUT it won't work. See any telecom endeavor.

The Duck

Money (1)

nova.alpha (1287112) | more than 5 years ago | (#23799661)

It's all about money, not user experience, technology, or anything else. The things you describe require serious investment (infrastructure, employees, servers, power, etc.), and large companies won't make it unless they absolutely must. Right now they don't have to.

i once invented anti-technology technology... (3, Funny)

canipeal (1063334) | more than 5 years ago | (#23799691)

couldn't figure out why the darn thing kept blowing itself up....

not the best terminology (0)

Anonymous Coward | more than 5 years ago | (#23799709)

The term "Anti-Technology Technologies" chosen by the poster is too neutral to do justice to the field of metering software. How about Anti-Motherhood Technologies, which provides the slight additional emotional context which would facilitate rational discussion.

What about... (1)

Idbar (1034346) | more than 5 years ago | (#23799711)

I'd let them measure, but I was wondering a couple of things:

Users who don't know much about the internet think they just check email; but what kind of email? I've seen people (still) sending 40MB files attached to emails.

Then there are viruses, popups, and advertising: downloads people would believe they can't be charged for. How will the companies deal with the "advertisement" issue, given that most advertising these days is heavy and flash-based? Moreover, how do they deal with viruses?

People normally buy a wireless router, plug it in, and if it works, it works. But then your neighbors can steal your signal and use the internet for whatever they want. I know it's the customer's responsibility, but are they planning to track people down in such cases and start legal action against "people using open wireless networks"?

Just wondering.

Re:What about... (4, Interesting)

Idbar (1034346) | more than 5 years ago | (#23799723)

BTW, is Microsoft paying for the constant annoying updates of its OS? Is Apple paying for the annoying connections of QuickTime (and iTunes), and Acrobat for its automated downloads?

Re:What about... (1)

nurb432 (527695) | more than 5 years ago | (#23800699)

Nope. When we go to metered service, you get to foot the bill for that 200MB service pack.

Those of us FreeBSD people who like to use the ports tree to stay current will be screwed too.

Re:What about... (2, Interesting)

FailedTheTuringTest (937776) | more than 5 years ago | (#23799991)

The problem you've identified is similar to the spam problem: I could not only annoy people, but cost them money by sending them large unsolicited emails.

A solution for this would be to charge for traffic, but charge the broadcasters, not the consumers. Home users would pay nothing for bytes received, but would be charged for every byte they send -- which is negligible for most home users but would cost prolific file-sharers and people running web sites on their home machines. (As a side effect, this would tend to discourage home file sharing.)

Large web-based businesses would see their costs go up, and those costs would be passed on to the consumer -- the price of downloading music from iTunes would increase, and Apple would pay their ISP, which would use that money to keep their network up, instead of home users paying the ISP themselves.
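To make the proposed billing model concrete, a toy version of sender-pays metering (the rates and the flat fee are invented for illustration):

```python
def monthly_bill(bytes_sent, price_per_gb=0.10, flat_fee=20.00):
    """Sender-pays billing: receiving is free, sending is metered."""
    return flat_fee + price_per_gb * bytes_sent / 1024**3

print(monthly_bill(2 * 1024**3))    # light home user: ~$20.20
print(monthly_bill(500 * 1024**3))  # heavy seeder / home web server: $70.00
```

The asymmetry is the point: a typical home user's upstream is a rounding error, while a prolific uploader pays in proportion to what they inject into the network.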

Re:What about... (1)

nurb432 (527695) | more than 5 years ago | (#23801377)

The problem you've identified is similar to the spam problem: I could not only annoy people, but cost them money by sending them large unsolicited emails.
Or the random DoS.

All right, that does it (4, Insightful)

DaveV1.0 (203135) | more than 5 years ago | (#23799713)

I hereby revoke shanen's geek credentials for failing to understand that single source versus multiple sources doesn't matter if the problem is the total volume.

The problem is not that one server or site is overloaded. The problem is that the provider's network, including things like routers and gateways, has finite bandwidth, and these applications, regardless of source, are using up most of it.

Ever hear the phrase "You can't put 10lbs of shit in a 5lb bag"? Ever wonder why they put in new water mains, and increase the size of existing ones, when they build more housing developments? Or why they widen roads when there's more housing? It is because the total volume has increased.

Re:All right, that does it (2, Funny)

SoapBox17 (1020345) | more than 5 years ago | (#23799857)

We all know the internet is a series of tubes, like water pipes. But you aren't thinking outside the box. Instead of building larger water mains, these cities should just use catapults to throw "packets" of water through the air to parts of the city, thus reducing the load on the old "pipe" infrastructure.

Re:All right, that does it (3, Insightful)

iamwahoo2 (594922) | more than 5 years ago | (#23800341)

The applications are not using up most of it, just their share. If twenty people are sending bits over a line, then the bandwidth can be divided up evenly among the twenty. If two of those people are torrent users downloading 4GB files and they remain online until peak hours, when the number of users jumps to 200, then they should only get 0.5% of the total bandwidth each. If we kick them offline, there are still 198 "normal" users on the line, and it is still congested at peak hours.

The problem for most users is the amount of available bandwidth at peak hours. If some guy is sucking up tons of bandwidth at non-peak hours, then he is not hurting anybody. It is not like we can take the unused bandwidth from non-peak hours and use it during peak hours.

The telecoms have not been able to follow through on their bandwidth promises during peak hours and they have managed to push the blame onto someone else. Now that people have bought into that excuse, they are going to try to make a few extra bucks off of it.

Quite honestly, I have no problem with people who use more of a service being charged more, if that is your business model. The phone companies have charged for long distance by the minute for years. But if we are going to start charging on a per-bit basis, then shouldn't I, as a person who sends fewer bits, get a lower price? Or at least get to carry my bits over to another month? See, they want to treat each customer differently based on what benefits them the most, and if it were not for their monopoly positions, they would not be able to get away with it.

Re:All right, that does it (0)

Anonymous Coward | more than 5 years ago | (#23801123)

Part of the reason the ISPs don't like torrenting is that the TCP/IP model actually gives torrenters more than their fair share. If I open a YouTube page, I open one TCP stream to my ISP. If you start a torrent and connect to 99 other peers, you open 99 streams. TCP's per-stream bandwidth sharing then allocates you 99% of the available bandwidth and me only 1% (assuming your peers can saturate the connection).

The thing that really forces the ISPs' hand is that currently there is no protocol for splitting the usage between two such customers 50-50. Comcast tried to do this by faking reset packets to torrenters (which is illegal: rather than throttle the streams themselves, which is technically complicated, they forged reset packets to make it look like the other end of your connection sent them) and got sued. A potentially cheap solution would be to update the TCP/IP protocol, which was not designed for the P2P usage model and was in fact created in 1982 to alleviate the same bandwidth shortage problem we're encountering right now. If it could be updated to split bandwidth equally between, say, IP addresses rather than streams, the problem would be largely reduced. The problem is that forcing adoption of a new protocol like that is quite a lot more difficult now than it was back when they came up with TCP/IP.
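The difference between the two fairness models is easy to see in a toy allocator (a sketch of the idea only, not how any real router is configured):

```python
def per_stream_share(link_bps, streams_per_user):
    """TCP-style fairness: every stream gets an equal slice of the link."""
    total = sum(streams_per_user.values())
    return {u: link_bps * n / total for u, n in streams_per_user.items()}

def per_host_share(link_bps, streams_per_user):
    """Per-IP fairness: split the link per user first, regardless of streams."""
    share = link_bps / len(streams_per_user)
    return {u: share for u in streams_per_user}

users = {"web_browser": 1, "torrenter": 99}    # one HTTP stream vs 99 peers
print(per_stream_share(100e6, users))  # browser gets 1 Mbps, torrenter 99 Mbps
print(per_host_share(100e6, users))    # 50 Mbps each
```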

Re:All right, that does it (0)

Anonymous Coward | more than 5 years ago | (#23801871)

1) Why would anyone want to sign up for an internet service where everyone is guaranteed only the lowest common level of active service (e.g. ~50bps)?

2) Fixed costs and variable costs occur with pretty much any good or service. Fixed-rate billing favours those who incur high variable costs; variable-rate billing favours those who mostly incur fixed costs. One way or the other, ONE OF YOU will SUBSIDIZE THE OTHER.

Re:All right, that does it (1)

DaveV1.0 (203135) | more than 5 years ago | (#23801635)

The only problem with your explanation is that it is wrong. It does not reflect TCP/IP; it more accurately describes line multiplexing.

If one has 200 users using TCP/IP, each of those users will have a different amount of bandwidth depending on the number of connections they have open.

The AC that replied to you gives a good example.

Re:All right, that does it (0, Flamebait)

shanen (462549) | more than 5 years ago | (#23802137)

I'm going to pretend that you're being humorous rather than a purely non-constructive arse with a tired wit. However, because I don't have much of a sense of humor, I'm not going to waste a lot of time on you. Nor am I going to waste any time defending my geek credentials.

Much of the problem is the so-called last mile. This is precisely where wireless networks could address much of the bandwidth constraint. In conjunction with active local caching and BitTorrent or similar protocols, the entire situation could be changed and improved. Perhaps a concrete example is the best approach for your stiff head. Consider the situation where a lot of network traffic is consumed by video such as the Daily Show. Imagine that you do the initial distribution with BitTorrent, effectively caching local copies for the relatively brief period of high interest. Most obviously, you greatly reduce long-distance network traffic from the central location; but with the local use of wireless networks, a great deal of the traffic will come completely off the wired network, since it will be distributed within dense urban hubs. They could still have their DRM (several mechanisms already exist), and even date the files for expiration (though I believe that would be an extension to BitTorrent-like protocols).

Actually, one of the new technologies we need is variable-power wireless networking (with more range than Bluetooth). By scaling down the transmission power as the node density goes up, you can effectively hold local bandwidth constant. Total power consumption does increase, but only because you have more nodes; the main constraint on efficiency is how effectively you can cache the data. Interestingly enough, time-limited video is especially suitable for distributed caching.
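A rough sketch of the power-scaling claim, using a simple path-loss model (the neighbour count, exponent, and constant are invented for illustration): choose each node's range so a disc around it contains a fixed number of neighbours, so range shrinks as density grows and transmit power falls with it.

```python
import math

def tx_power(node_density, neighbours=5, path_loss_exp=3.0, k=1.0):
    """Power needed to reach ~`neighbours` nearby nodes at a given density.

    Range r satisfies density * pi * r^2 = neighbours, and the power
    required to cover range r is modeled as k * r^path_loss_exp.
    """
    r = math.sqrt(neighbours / (node_density * math.pi))
    return k * r ** path_loss_exp

for density in (1, 10, 100):  # nodes per square km
    print(density, "nodes/km^2 ->", round(tx_power(density), 4), "power units")
```

Per-node power drops as density rises, which is why local bandwidth can stay roughly constant even as total consumption grows with the node count.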

Limit The Power of Corporations (1)

b4upoo (166390) | more than 5 years ago | (#23799729)

I have remarked in the past that I am not sure that any government can really allow the free flow of information.
And there is little we can do about the nature of government; but the second player is big business. We can and should limit the power of corporations and punish them when they work against public interests by doing such things as limiting the flow of the internet. People have rights; corporations should not enjoy the same rights people do. For example, the directors of a large corporation each have one vote, just like the common man. How is it that they are allowed undue influence by hiring professionals to lobby for their interests? Lobbying, bribery, and corruption are pretty much identical terms in most cases.

Re: Limit The Power of Corporations (0)

Anonymous Coward | more than 5 years ago | (#23800763)

Er...why shouldn't they be allowed to hire professionals to lobby for their interests?

I certainly can.

exchange of information (0)

Anonymous Coward | more than 5 years ago | (#23799737)

I thought the exchange of information over the Internet was supposed to be a good thing? Couldn't we use technology more constructively? For example, if there is too much network traffic for video and radio channels, why don't we offset it with increased use of P2P technologies like BitTorrent?

You argue for the exchange of gigabyte-sized disk images by calling it "the exchange of information"?

Simply exchanging knowledge doesn't clog the tubes.

Simple reason (2, Insightful)

Oktober Sunset (838224) | more than 5 years ago | (#23799751)

Why build more infrastructure to serve customers if you can find new ways to make them pay for the infrastructure you have now?

Re:Simple reason (1)

mgblst (80109) | more than 5 years ago | (#23801025)

Yes, they should just spend the $10-$100 billion on new infrastructure, just to keep some fuckers happy. I can't understand why they wouldn't just do that. I mean, they wouldn't get any more money from us, but still, it's only $10-$100 billion, who cares?

Re:Simple reason (1)

The End Of Days (1243248) | more than 5 years ago | (#23801483)

Well, you know, they're rich. We deserve to have them spend their money to keep us entertained on the cheap. I think it's in the Constitution or something.

Bittorrent looks for lots of sources .... (0)

Anonymous Coward | more than 5 years ago | (#23799755)

BitTorrent looks for lots of sources, not the closest sources. That is an issue with the current protocol and how it tries to find where to download from. Only recently have people started looking into how to improve the efficiency of its network usage.

Re:Bittorrent looks for lots of sources .... (1)

Entropius (188861) | more than 5 years ago | (#23799995)

To be fair, the BitTorrent devs have had to spend time doing things like protocol encryption and distributed hash tables to protect swarms against traffic monitoring and people who send lawyers after trackers. Resilience (vs. lawyers and campus Network Services departments) has taken priority over efficiency for the devs.

Anonymous Coward (0)

Anonymous Coward | more than 5 years ago | (#23799879)

Maybe it's time for some intelligent journalists and editors at the Times.

"Time Warner would not reveal how many gigabytes an average customer uses, saying only that 95 percent of customers use under 40 gigabytes each in a month.That means that 5 percent of customers use more than 50 percent of the networkâ(TM)s overall capacity, the company said, and many of those people are assumed to be sharing copyrighted video and music files illegally."

The whole article is about online video taking up all the tubes, and then they throw in unsubstantiated claims about piracy being the cause. The more these big media companies play the "piracy" card, the less I believe there is a bandwidth apocalypse coming, and the more I think they just want an "overcharging customers" line on the cash flow statement.

You must be new here (2, Insightful)

billcopc (196330) | more than 5 years ago | (#23799957)

If you had been paying attention at all, you'd understand that the purpose of these "anti-tech techs," as you call them, is explicitly to limit progress so the rich old fucktards can continue milking their obsolete business models until they retire or drop dead.

To many people, progress is a scary, dangerous thing. Money, on the other hand, is a sultry lover that drives their every passion. We folks on Slashdot may prefer cheap, plentiful bandwidth over money, but we're a tiny little minority in the grand scheme of things. The average Joe doesn't understand technological evolution and most certainly does not see where it is all headed... it is far easier for Joe to stay ignorant and pay up.

Re:You must be new here (1)

The End Of Days (1243248) | more than 5 years ago | (#23801507)

You're missing the big picture, chief. You (minority of) folks on Slashdot want everyone else to pay for your bandwidth. All you have to do is pony up the cash yourself. There's no justice in making my dad subsidize your desire to pirate every anime ever made.

Re:You must be new here (1)

Omestes (471991) | more than 5 years ago | (#23802233)

You're missing the big picture, chief. You (minority of) folks on Slashdot want everyone else to pay for your bandwidth. All you have to do is pony up the cash yourself. There's no justice in making my dad subsidize your desire to pirate every anime ever made.

You are wrong: a MAJORITY of folks on /. want to pay for what they were told they are paying for. If I'm paying $50/mo for unlimited service, then I expect unlimited service, not service limited by what I choose to do with it, or by which content provider pays the ISP more for priority.

It really has nothing to do with anime or piracy. It's a far more basic question: should ISPs have to live by their own words, or can they lie through their teeth and justify it with the fallacious "piracy bad" excuse?

Also, the question here is why they are spending more money going after their customers than on, you know, fixing the infrastructure to handle everyday traffic.

It's getting to the point where I think corporations are actively against their customers; they'd prefer to provide no services whatsoever and still force us to pay for them.

bullshit (2, Informative)

speedtux (1307149) | more than 5 years ago | (#23799967)

I thought the exchange of information over the Internet was supposed to be a good thing?

It is. And that's why it's a good thing if my neighbor is discouraged from eating up 99% of the bandwidth with hundreds of simultaneous connections while I'm trying to work over ssh, or if he is at least made to pay for the necessary upgrades to our shared wire.

Why don't we use wireless networks to reduce the traffic on the wired infrastructure?

Let us know if you come up with something that works. Having suffered through WiFi-based home Internet access for a few months, I certainly don't want to go back. Of course, it kind of caps your bandwidth implicitly.

For example, BitTorrent is excellent for rapidly increasing the availability of popular files while automatically balancing the network traffic, since the faster and closer connections will automatically wind up being favored.

P2P and BitTorrent are horrifically wasteful because the same packets keep traversing the same wires. And they seem fast to you for file distribution because they make many connections and grab an unfair share of available bandwidth.

Instead, we have an increasing trend toward anti-technology technologies and twisted narrow economic solutions such as those discussed in the NYTimes article

First, perhaps you could show us some evidence that there is an "increasing trend".

Then you might discuss how today compares to, oh, 20 years ago and 10 years ago in terms of maximum throughput, latency, and cost per megabyte.

As for P2P, combined with standard Internet protocols, it really is a technological disaster, even if it is a social success.

Re:bullshit (1)

Entropius (188861) | more than 5 years ago | (#23800021)

P2P is a technological success in reducing the load on any one point on the network. If you make the assumption that the cost of bandwidth grows nonlinearly, then it's highly useful for e.g. Ubuntu releases.

Re:bullshit (0)

Anonymous Coward | more than 5 years ago | (#23800317)

Yeah, because the general population of the world really gives a shit about Ubuntu.

While a neat tool for those using it, torrents and swarm P2P in general screw over everyone else on the network because of the way TCP/IP is currently implemented.

Re:bullshit (1)

speedtux (1307149) | more than 5 years ago | (#23800471)

P2P is a technological success in reducing the load on any one point on the network.

Reducing relative to what? Heck, USENET is a more efficient distribution mechanism than P2P.

If you make the assumption that the cost of bandwidth grows nonlinearly,

Using 'em big words again without knowing what they mean, eh?

Re:bullshit (1)

ClassMyAss (976281) | more than 5 years ago | (#23801127)

P2P is a technological success in reducing the load on any one point on the network.

Reducing relative to what? Heck, USENET is a more efficient distribution mechanism than P2P.
I think he meant that P2P reduces the load on the original distributor of the information, which is definitely true as compared to hosting something on a website. It's good for the distributor, bad for everyone else (though faster with a decent swarm). Your counter about Usenet is true as well: Usenet is actually pretty efficient overall, since all the data is effectively cached at each ISP (I think - not too certain of the technical details). However, this only works as long as the overall volume of Usenet traffic is low enough to be stored nearby, so it's definitely not scalable in the way most P2P solutions are.

Re:bullshit (1)

tylernt (581794) | more than 5 years ago | (#23800359)

it's a good thing if my neighbor is discouraged from eating up 99% of the bandwidth with hundreds of simultaneous connections while I'm trying to work over ssh
Let him. Just make sure the router and switches prioritize traffic per-user, based on the number of packets they've sent/received in the last hour or 24 hours. You'll neither notice nor care whether that other 99% is totally utilized or not used at all, because your trickle of SSH packets will always zip to the front of the queue ahead of your neighbor's torrents.

If I were an ISP, that's what I would do.
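Something like this toy scheduler, say: a minimal sketch of the idea, not any real router's queueing implementation (the users, packet sizes, and counters are made up):

<ecode>
import heapq
from collections import defaultdict

# Toy per-user priority queue: packets from whoever has moved the
# fewest bytes recently get dequeued first. In a real deployment the
# counters would decay over the chosen window (an hour, 24 hours).
class FairScheduler:
    def __init__(self):
        self.recent_bytes = defaultdict(int)  # rolling per-user volume
        self.heap = []
        self.seq = 0                          # FIFO tie-breaker

    def enqueue(self, user, pkt_len):
        self.recent_bytes[user] += pkt_len
        heapq.heappush(self.heap,
                       (self.recent_bytes[user], self.seq, user, pkt_len))
        self.seq += 1

    def dequeue(self):
        _, _, user, pkt_len = heapq.heappop(self.heap)
        return user, pkt_len

sched = FairScheduler()
for _ in range(50):
    sched.enqueue("torrent_guy", 1500)  # bulk flow, full-size packets
sched.enqueue("ssh_guy", 64)            # one tiny interactive packet
print(sched.dequeue())                  # ('ssh_guy', 64) jumps the queue
</ecode>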

Re:bullshit (1)

speedtux (1307149) | more than 5 years ago | (#23800519)

Let him. Just make sure the router and switches prioritize traffic per-user, based on the number of packets they've sent/received in the last hour or 24 hours.

That's not sufficient because the same bottlenecks occur all over the network, so this kind of logic needs to be deployed in all routers. In addition, some people are willing to pay for sustained 50 Mbps, so you need traffic classes. And to make it all work, you need more than the current TCP/IP protocols. So, although you may not be aware of it, you're basically saying that people should do what they're already trying to do.

But, actually, I think simple volume pricing is, in fact, preferable, because the "just make sure" you propose means that ISPs get much more fine-grained control over traffic than they have right now.

Re:bullshit (0)

Anonymous Coward | more than 5 years ago | (#23800827)

I would actually consider his method superior to yours, because at least I'm getting the 15/15 that I paid for.

I sincerely don't give a fuck if I'm taxing somebody else's bandwidth, it's the ISP's problem, not mine.

Selling a 100mbit line and then saying "Oh, you're using a little much" would never fly in the business world. There's no reason why consumers shouldn't take what's been promised to them.

Re:bullshit (1)

speedtux (1307149) | more than 5 years ago | (#23801049)

Selling a 100mbit line and then saying "Oh, you're using a little much" would never fly in the business world.

Not only does it fly, it's the usual way business plans are sold: by connection speed and monthly traffic volume. It's also how the ISPs do business with each other.

Your government should shut this down (2, Insightful)

GraZZ (9716) | more than 5 years ago | (#23800095)

The internet providers were given massive tax breaks to improve their networks (fiber to the home and whatnot). Not only have they not done that with the money, but the inferior networks they've built instead are reaching capacity.

Somebody should make your ISPs sleep in the bed they made.

I also notice that TFA appears to reference only cable companies. Cable internet shares bandwidth among the endpoints on a segment, a pretty boneheaded design if a significant number of endpoints are going to be using it heavily. Maybe this is simply the end of that technology's ability to improve. DSL and FTTH vendors could then capitalize and crush those companies, improving internet access for all. What is stopping this from happening (besides laziness)?

Re:Your government should shut this down (1)

nurb432 (527695) | more than 5 years ago | (#23800577)

What is stopping this from happening (besides laziness)?
Effective monopolies.

technology (1)

fermion (181285) | more than 5 years ago | (#23800563)

Technology is the systematic recording of how to do things.

If I use technology to build a missile, and then use technology to build a laser to shoot it down, is one a technology and the other an anti-technology? No, both are simply applications of techniques learned and taught.

Furthermore, it would be difficult to know which is the technology and which is the anti-technology. If I can't go about my work because kids are downloading pron 24/7 and clogging the pipe, even though the attempt is made to balance the traffic, then is technology to unclog the pipe a good or bad thing? The technology used to unscramble the picture and catch the guy who was abusing little kids, was that good or bad?

There is a sense of entitlement that is pervasive in our culture, a sense that we somehow have an inherent right to any technology. Not only a right to the technology, but a right to have someone produce it at the price we want to pay, in the form that we want, and as soon as we want it. If this isn't possible, the government should subsidize it. We see this with gas. If we don't get what we want, we whine.

The problem with this is that not all technology is good. We see this with medical supplies. We see this with cars and SUVs. A bit more time, a bit less greed, a reduction in the sense of entitlement, and we might have technology that is helpful and not just cool.

Re:technology (0)

Anonymous Coward | more than 5 years ago | (#23800927)

First off, take a history lesson: Americans have always had a sense of entitlement, even as a fledgling nation. And as mentioned before, the reason we whine is that we have limited control over corporations to influence the way in which they produce what we need. It is the fundamental right of a US citizen to complain to our friends, to our relatives, and to our congressmen about the current situation.

Do you recall your mother ever asking for a mini-van? I don't. In fact, our 1975 Beauville was a beast of a vehicle and got great gas mileage, but because some executive needed to raise the quarterlies, they created a new product which was marketed as the greatest thing, and now we don't know what is wrong with it because the damn thing doesn't have a base of understanding equal to the speed at which it was produced. Reference the discussion of the Japanese business model: long-term planning works best, and how is that done? By whining and DIALOG.

"exchange of information" (1)

nurb432 (527695) | more than 5 years ago | (#23800567)

"I thought the exchange of information over the Internet was supposed to be a good thing? "

Only if that information has been properly sanitized by the government and you pay a licensing fee to consume it.

Otherwise, it's evil.

FM (1)

westlake (615356) | more than 5 years ago | (#23800597)

You may remember how FM radio was delayed for years

FM radio was delayed for years because the enormous amount of money being generated by RCA's investment in AM broadcasting was funding the development of the infinitely more disruptive technology of television.

Then there were the minor setbacks of World War Two and Korea.

FM doesn't come into its own until the Hi-Fi craze of the mid-to-late '50s. The LP. Magnetic tape. Heathkit for the budget-minded hobbyist. H.H. Scott, Marantz and McIntosh for the audiophile.

Duh (1)

aleph (14733) | more than 5 years ago | (#23800841)

The poster makes it sound like the sky is falling. OMG, if you download terabytes of data per month on your residential account they might *gasp* charge you!

Australia has had metered plans pretty much since inception. Most are of the "XX GB, then shaped to 128 kbps" variety. There have been companies offering unlimited plans, but they either go under or oversell at a horrendous ratio.

If the cost of bandwidth, as a proportion of operating cost, goes up for the ISP, then something has to give. Either they introduce some type of allocation (metered plans) or the overall quality of service goes down (they oversell more). It's not some grand conspiracy; it's *possibly* money-grubbing, but it's far more likely just trying to keep on top of a ballooning cost.
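A quick bit of oversell arithmetic makes the trade-off concrete (every number here is invented for illustration):

<ecode>
# Oversell arithmetic: an ISP provisions capacity C upstream and sells
# N links advertised at speed S. All numbers invented for illustration.
capacity_mbps = 1000
users, advertised_mbps = 500, 20

contention = users * advertised_mbps / capacity_mbps
print(f"contention ratio: {contention:.0f}:1")                   # 10:1

# Fine while average use is light; if everyone torrents flat out,
# each user really gets capacity / users:
print(f"worst-case per user: {capacity_mbps / users:.0f} Mbps")  # 2 Mbps
</ecode>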

Really, unless you're streaming HDTV 24/7, it's *not* a huge issue. I have 60 GB/month in total, and even when I'm leeching a bit I'll be lucky to go through half of that. Often I'll only use a couple of gigs. And I'm still considered a _heavy_ user, ffs.

Re:Duh (1)

RAMMS+EIN (578166) | more than 5 years ago | (#23801579)

``The poster makes it sound like the sky is falling. OMG, if you download terabytes of data per month on your residential account they might *gasp* charge you!''

That's fine, but then they have to be upfront about it. If they tell you you get so many bits per second, you should be able to expect to get that many bits per second. If they throttle your connection or send you extra charges if you generate more than some set limit of traffic, without having told you they would do so, you are not getting what you signed up for.

Of course, the ISPs can get around this by saying something generic like "Fair Use Policy" or "we reserve the right to ..." or "indicated speeds are maximum speeds", etc. But, in that case, what you are really signing up for is something very vague, where you may or may not get the speed that was advertised. I would much prefer if the ISPs told you exactly where the limits are and what happens when you exceed them, but if you want to sign up for a vague unlimited-but-not-really plan, that is your choice.

What annoys me far more is ISPs that outright block certain traffic (such as TCP port 25) or simply offer bad service (e.g. routing problems a few hours each week). I understand that it costs the ISP money if people generate a lot of traffic, and I would actually be happy to pay on an "amount of traffic" basis, but if I do pay you, I expect you to do your job. I can understand occasional mishaps, but outright refusing to handle traffic or frequent outages are enough to send me looking for alternatives.

Internet bandwidth costs money. (1)

Brett Glass (98525) | more than 5 years ago | (#23800947)

Many of the posters here, including the one who authored the original article, seem to be forgetting a very simple but important point: bandwidth costs money. A lot of money, in fact, if you're an ISP outside a major city. Many ISPs pay $100 to $300 per megabit per second per month for their bandwidth. Can they afford to give bandwidth hogs unmetered, unrestricted access to it? Of course not!

Add to this the fact that TCP/IP is the most inefficient way yet devised to distribute media (a simple analog TV tower is millions of times more spectrally efficient) and that P2P is designed to eat up many times the bandwidth required to transfer the data to the user (because the user's computer becomes a server), and it's no wonder that providers are concerned.

Regulations that prohibited ISPs from throttling P2P, or from implementing pricing tiers, would sting the telcos and cable companies (which can cross-subsidize from their other services) but would flat out kill their smaller, independent competitors, leaving a cable/telco duopoly. So, be careful what you wish for. We all like to get a good deal, but if you ask the government to legally mandate that people give you expensive stuff for nothing, do not be surprised when they go broke in a hurry. For more, see my remarks to the FCC at http://www.brettglass.com/FCC/remarks.html [brettglass.com] .
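To put rough numbers on the first point (the transit prices are those quoted above; the user's sustained rate and monthly fee are hypothetical):

<ecode>
# What one sustained heavy user costs a small ISP at the transit
# prices above. The user's rate and monthly fee are made up.
transit_prices = (100, 300)   # $/Mbps/month
sustained_mbps = 2.0          # e.g. a 24/7 seeder averaging 2 Mbps
monthly_fee = 40.0            # what that user actually pays

for price in transit_prices:
    cost = sustained_mbps * price
    print(f"at ${price}/Mbps: costs ${cost:.0f}/month, pays ${monthly_fee:.0f}")
# at $100/Mbps: costs $200/month, pays $40
# at $300/Mbps: costs $600/month, pays $40
</ecode>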

Re:Internet bandwidth costs money. (1)

tengu1sd (797240) | more than 5 years ago | (#23801709)

>>>Can they afford to give bandwidth hogs unmetered, unrestricted access to it?

They don't give access, they sell it. And now they're complaining that someone is using the bandwidth that has been paid for. Most providers provision X bandwidth and sell (Y x X). The problem is that users are beginning to ask for the X they were sold.

Uhh. What? (0)

Anonymous Coward | more than 5 years ago | (#23801229)

You may remember how FM radio was delayed for years
Does anyone know anything about this delay? This is the first time I have ever heard of any effort to suppress FM. Some citations would be nice.

Wireless to relieve wired network stress? (1)

RudeIota (1131331) | more than 5 years ago | (#23801609)

TFA: "Why don't we use wireless networks to reduce the traffic on the wired infrastructure?"

Well, eventually there *is* going to be a wire. :)

Making high-bandwidth *wired* infrastructure affordable should be our priority, since cost seems to be the biggest issue with last-mile lines. That's really where most of the traffic issues present themselves: when you have 200 people sharing a single hub.
