
P2P Will Lead To Higher ISP Charges?

Lumpish Scholar writes "This Interactive Week article suggests that P2P is a wonderful thing, the direction the Internet is going ... and utterly breaks ISPs' business models, to the extent they may raise their monthly rates, or at least offer two-tier plans that will charge some users more. If true, ISPs might be seeing cost increases from two directions: more dialup ports (because users are staying on longer, so peak usage increases), and fatter pipes to their upstream or peer ISPs. On the other hand, to quote the article: "We're seeing greater decreases in the cost of the bandwidth than we are seeing increases in individual bandwidth usage." A price increase might or might not be justified, but Slashers will surely be interested if increases are coming." People have been making this argument for a while; remember when web surfing started to become common and people stayed on longer? The ISPs claimed the same things then, too.
  • "The hype around P2P is valid," said Larry Cheng, an associate at venture capital firm Battery Ventures. "It's changing the nature of Internet computing."
    Ok guys, I concede, all the hype around P2P is absolutely valid... not! Sheesh, it was also VCs that pitched eToys.com, ePets.com, Amazon, and many others as being worth billions. Earnings? What do we need earnings for?

    Anyone remember when "push" technology was going to be the next big thing? I predict P2P is going to go the same way push went: mostly out the window, with a lot of money chasing after it. Can anyone actually tell me the benefits of P2P, over and above client-server, in actual, real-world applications? Sure, we have Gnutella, Napster (sort of), and countless others, but these all basically exist because piracy is illegal and their P2P-ishness helps skirt the law.

    Now I'm not saying that P2P has absolutely no uses, but until someone comes up with a substantial and legitimate use, I see absolutely no reason to believe the hype.
  • Hit the limit and you'll get a message saying you're cut off. I already have.

    Or, like many cable-modem customers are finding out, your packet loss will suddenly go through the roof. This is at least a little bit fairer than shutting people down.

  • Some years ago they thought they would recoup their investments by bringing "content" that would pay for their network.
    They failed miserably, competing against other ISPs while the costs of their services rocketed without any results. The only outcome: a total block of good multicast services.
    If ISPs don't come up with good multicast services across networks, this will only lead to higher bandwidth costs for the alternative services that get pushed onto users.

    The Internet is not TV, but I don't think anyone would compare a slow P2P connection to a fast multicast connection, whatever the content.
  • The way things are right now, most people are being charged 19 clams (US) for terrible service, while their pals on the same service are getting better performance. Perhaps it is time for a change in the way ISPs deal with charging folks, but I'll tell you what, I'll start my own before I pay any more for the POS throughput that I have to deal with.
    Frontier Communications [frontiernet.com]
    Don't believe a word these people tell you. They may say that you are ready for DSL, but that doesn't mean that it is true... Well, if you want to know why I gripe about this, take a look at the site yourself: Frontier Lightning Link [frontierli...nglink.com]
  • by swb ( 14022 ) on Tuesday February 27, 2001 @05:38AM (#399067)
    I don't see how any ISP can make any money offering xDSL for a fixed rate. I know the gamble is that most lines sit idle most of the time (mine averages 58 bytes/second, w/ BIND, Sendmail, Apache and three PCs connected), but at the same time it only takes a few dozen people maxing their lines out to seriously dent an OC-3, and not every ISP has that much bandwidth.

    The idea of metering isn't as bad as it sounds -- one thing that keeps ISPs from rolling out serious bandwidth (1.5Mbps SDSL for everyone) is that they're afraid they won't be able to meet the demands of that much usage, because the marginal cost outpaces the marginal income. Charging by the packet, ISPs could easily offer as much bandwidth as you want WITHOUT having to worry about where the cash for the next OC-3 will come from.
  • I have a cable modem and the expertise to run a mail server. Still, I don't care to. Here's why.
    • Unless you also do your own DNS, your friends get to email you at bob@linenoise.city.state.provider.com instead of bob@provider.com
    • Suppose you do your own DNS so you can have your vanity domain? That's now at least two services you need running, and low-port services too. You'd better know what you're doing, and even if you do, I'm honestly too lazy for it, considering that I get free mail from the provider - and usually a few POP boxes, too.
    • Suppose you do your own DNS, and the power goes out. When it comes on, your DNS server has a different IP. Unless you're paying for a static IP, there's no guarantee you'll get the same one back. You generally (not always) get an IP from the provider via DHCP, and if you have a static IP, the router will always assign it to you. But if it were to lose your profile, you go back to the pool with the rest of the schmucks.
    • Even if I were to decide that I want to run my DNS and mail out of my home, I guarantee the neighbors wouldn't want to, and I doubt they could tell you what DNS stands for. They want instructions on paper from the provider, and that's all. They couldn't care less about some wild-eyed geek's rantings about the wonders of commodity broadband. They get the cable modem because webpages load faster.


  • Well, under the old ISP model, you could have 500 users. You would assume, and usage statistics would back this up, that only 10% of them would be on at a given time. So, you buy enough bandwidth to handle 50-60 users. Each user gets their 56k of bandwidth and is happy.

    Now, you have 500 users with 300 of them staying on and active all at once. You have to buy more bandwidth to give each user the 56k they are used to getting, otherwise you start to lose customers. (The sketch at the end of this comment runs the numbers.)

    This is different from a user saying they would like 128k of bandwidth and asking how much extra that would cost.

    Yes, a large quantity would cost more, but why should they have to pay more for their small quantity just because all these people you wooed over to your service are now using it, cutting into your profit margin?

    Or you could be like my cable modem company and just not increase the bandwidth and laugh as the profit margin grows!
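
    A quick back-of-the-envelope version of that arithmetic in Python (the 500 users, the 10% assumption, and the 56k figure come from the comment above; everything else is illustration):

        # Oversubscription arithmetic (sketch; figures from the comment above).
        users = 500
        per_user_kbps = 56.0

        # Old model: ~10% of customers online at once.
        old_backbone_kbps = 0.10 * users * per_user_kbps    # ~2,800 kbps

        # New model: 300 of the 500 online and active all at once.
        new_backbone_kbps = 300 * per_user_kbps             # 16,800 kbps

        print("old: %.0f kbps, new: %.0f kbps (%.1fx)" % (
            old_backbone_kbps, new_backbone_kbps,
            new_backbone_kbps / old_backbone_kbps))         # a 6x bandwidth bill

    Same subscribers, same 56k each; the only thing that changed is how many of them are on at once.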
  • That's only if you assume that the majority of peer-to-peer connections are between users of the same ISP. In the Real World (TM), I would guess this is usually not the case. Services like Napster generally make no distinction between a user next door to you and one halfway around the world (except in ping time, and then only if you find the file you want both locally and somewhere remote, which is probably rare).

    Peer to peer doesn't imply any sort of geographic proximity, it is just a description of how two clients interact with one another.
  • When I get cornered into tech support on a slow day, the number one complaint is that people can't get their email, and most of them use Outlook Express (which sucks on Windows). It'd be scary to have normal non-techie people running mail servers.
  • by FallLine ( 12211 ) on Tuesday February 27, 2001 @08:11AM (#399072)
    Well, there seems to be a lot of vagueness in defining P2P and a lack of clarity in examining it, especially from those who hype it. I am well aware of most applications of it, and of more traditional applications. However, when someone hypes P2P as a new and earth-shattering thing, they're invariably referring to Napster-like technologies. This VC certainly is.

    When it comes to Napster-style applications (insofar as they are "new"), I have very specific doubts; refer to my reply elsewhere in this thread if you want to hear them. As for the vague notion of P2P that is hyped, I find it is almost always one of: old applications with buzzwords attached, ill-conceived ideas, or simply undefined.

    Let me just briefly hit some of your bullets:

    Decentralized -- no single point of failure (in true P2P, things like Napster have a central server that can fail, but Gnutella doesn't)
    Well, I'd argue that with Gnutella every point IS a point of failure, not just a potential one. Where are the practical and feasible applications of a truly decentralized application of this nature?

    Distributed -- don't have lots of people all hitting the same resource at the same time.
    So what? With bandwidth costs dropping, why bother with such a hokey structure? If repositories like Tucows, mp3.com, various mirrors, and many others can survive, what is the need for it?

    More information -- the contributions of each person are added to the network, not just some central server
    Again, this sounds great in theory, but what does it mean in reality? First, that same information can be added to a central server, so the argument is largely an economic one. Second, do we really want "information" from people? The simple and brutal fact of the matter is that most people are not artists, are not terribly intelligent, etc., thus I question the value of a decentralized system. There is a large need for filters, to pick out things of value and to reduce the S/N ratio. Centralized techniques lend themselves to this much better...

    More storage space -- probably Exabytes or Petabytes of storage space available on a large P2P network. Try getting that in one box :)
    Again, costs are plummeting. What's more, even though the total amount of storage space may be greater on a widely distributed network, is the value greater (or less)? In other words, how much of the data is going to be redundant? Generally, most of it. How much of the data is going to be of sufficient quality? Very little.

    In short, I have lots of questions and doubts, and so many P2P pundits are full of hot air, vague notions, and fuzzy thinking.
  • Think up some totally nonsensical or obvious "fact". Dress it up in some fancy language that may or may not mean what it sounds like ("the opportunity cost of network construction", indeed) and put it in some short clear paragraphs. Genius. Much better than the Urban Existential persona--and much MUCH better than the completely obvious Heidi Wall.
    --
    Non-meta-modded "Overrated" mods are killing Slashdot
  • For a single host, the mail server can almost be a no-brainer. It only has to accept mail addressed to itself and permit no relaying at all. It should not be too difficult to provide such a mail server that needs no user configuration at all, reading the local hostname (the only one for which mail would be accepted) from the OS. A sketch of the idea follows.
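    A minimal sketch of that idea using Python's standard smtpd module (the mbox path and the port are placeholders, and real local delivery is waved away):

        # Zero-configuration, no-relay mail server sketch.
        # Accepts mail only for the hostname read from the OS; refuses the rest.
        import asyncore
        import smtpd
        import socket

        HOSTNAME = socket.getfqdn().lower()   # "no user configuration": ask the OS

        class SelfOnlySMTPServer(smtpd.SMTPServer):
            def process_message(self, peer, mailfrom, rcpttos, data):
                for rcpt in rcpttos:
                    if rcpt.rsplit('@', 1)[-1].lower() != HOSTNAME:
                        return '550 Relaying denied'   # not addressed to this host
                # Toy "delivery": append to a local mbox-style file.
                mbox = open('/tmp/inbox.mbox', 'a')
                mbox.write('From %s\n%s\n\n' % (mailfrom, data))
                mbox.close()
                # Falling through returns None; smtpd answers "250 Ok".

        server = SelfOnlySMTPServer(('0.0.0.0', 2525), None)   # port 25 needs root
        asyncore.loop()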
  • Didn't FidoNet run (and probably still does) along similar lines? Each system exchanged data with others that were local to it. They in turn exchanged data with other systems local to them but not to you, and so the data was carried over large distances in many mutually local hops (with obvious breaks in this mutually local system caused by national borders, oceans, etc.)
  • Which is what the original ISPs used to do. The provision of additional content/value came later.

    That's kind of true. It just used to be that the people you dialed in to and the people whose content you read were different from the ISP. It was really a very brief period of time when the no-frills mom-and-pop ISP really amounted to anything. Before that your ISP was your employer, or it was AOL/CompuServe, and now with the real ISPs going away we're left with your connection being through something like AOL/AT&T/MSN and so forth, where you're paying for their packaged content plus a TCP/IP connection to the world at large.

  • Do you live outside the United States?

    Equal speech is not the same as free speech.

    Equal Speech just means that everyone can say what everyone else can say.

    That does not guarantee a person's right to say what he or she wants when they want it (which is freedom of speech).

    I could say that no one can say the word "Cat" and everyone would have equal speech, because no one can say "Cat".

    Sorry, just a bit nit-picky :)
    Otherwise, the rest of the statement is good.

  • I can't see an advertisement from my local broadband providers saying "Buy our services and be able to check your email in 2.6 seconds". Most of them are about being able to have streaming media, download music and movies, and kill the world wide wait (WWW). What kind of behavior did they expect?
  • Many of the readers of this article seem to forget that there are natural resources involved with additional bandwidth. These natural resources take the form of either glass (fiber) or copper running across the country.

    The fact is, bandwidth itself doesn't cost money. That's why it's free to transfer as much as you want between those 100baseT boxes on your LAN. The cabling and the hub cost you $60, and that's that. That isn't the case, however, for cross-country bandwidth.

    The fact is, those fiber digs cost some serious money. The cross-country transport (or transit) providers want to make their money back from the investment of actually laying or leasing cable across the country. Those big pipes cost a lot of money. That's why the majors charge $400 - $500 per megabit in transit costs.

    That's right -- it likely costs your ISP $400 - $500 for bandwidth. The public has a distorted perception of how much bandwidth costs simply because of the VC-fueled fire-sale pricing broadband has had. Think broadband is too expensive? The simple fact is that the CLEC that hooked you up with DSL spent $600-$900 to get you running (no, that's not an exaggeration or typo). Then, they have to hope that you don't run a server on your connection, because if you constantly have bandwidth flowing across the connection, it'll cost them more than it costs you.

    That's the danger of peer-to-peer -- it's customers setting up their own servers, sometimes (as in the case of Napster) inadvertently. It only nicks the dial-up economics slightly -- the article's mostly hype. But for broadband, P2P is a huge problem (the quick calculation at the end of this comment shows why).

    The fact is, the nation's broadband customers are going to have to choose between one of the following:

    1) Significant rate hikes. Right now, it takes 5 years to start making money on a DSL subscriber. That won't last. The article's wrong.
    2) Significant limitations. Cutting people off when they do certain things, or having an extreme backend cap on bandwidth. This is the "FAPping" done by Hughes on DirecTV subscribers.
    3) Poor service. Simply not purchasing enough bandwidth for customers.

    The fact is, it's going to be number three. With scenarios 1 and 2, customers simply won't sign up, or will complain loudly. If everyone has the same sub-par service, though, people will just choose what's convenient. The Baby Bell and cable companies are well known for charging moderate rates for severely sub-par service. This has put a squeeze on the good guys, the independent ISPs, leaving markets with a near-monopoly of unsatisfactory service.

    It's already happening.

    Eschatfische.
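
    To make that concrete, a quick hypothetical calculation (the $400-$500 per megabit transit figure is from the comment above; the utilization and retail price are assumptions):

        # What one always-on DSL subscriber can cost in transit (sketch).
        transit_per_mbit = 450.0    # $/Mbit/month, midpoint of the quoted range
        line_mbps = 1.5             # the subscriber's line
        sustained_load = 0.5        # assume a P2P box keeps the line half full
        subscription = 49.95        # assumed retail price

        monthly_transit = transit_per_mbit * line_mbps * sustained_load
        print("costs ~$%.0f/month in transit, pays $%.2f"
              % (monthly_transit, subscription))

    Roughly $340 in transit against a ~$50 subscription: the flat rate only works while most lines sit idle.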

  • It seems to me that the source of this information needs to be considered. Todd Spangler of Interactive Week? This guy writes for ZDNet and probably wouldn't know a true daemon if it bit him in the ass.
    More than likely this guy was talking to some buddy at an ISP who was tossing around some geekspeak and saw an angle for a story. Consider his thoughts on the subject of modem-to-user ratios; he brings up the old 40:1 ratio. How does that relate at all to cable, DSL, or Tx lines -- the users who will really change the amount of bandwidth that ISPs require -- and their impact on pricing?
    Go back to writing such articles as "Feeling the Spam" and leave the tech stuff to people who know what they are talking about.
  • the idea that the connection of a particular OS to the internet has adversely affected it from a non-technical standpoint is absurd

    Really? You don't think the tools people design and use reflect their philosophy? An odd view indeed. I'm willing to bet any decent anthropologist or philosopher will dispute that.

    What the hell is an "internet supporting OS"?

    Remember passing around floppies that contained "winsock.dll"? If it doesn't support IP out of the box, it doesn't support the internet. Grafting it on after realizing you're 10 years behind the competition doesn't count.

    In addition, I didn't realize it was beneficial to the net for every machine to have every possible internet-related daemon/service running by default. Actually, I thought the exact opposite was true...

    In our post-September times, you are right. Once, though, when people didn't have to worry about the millions of script kiddies, thieves, and other hooligans on the net (that, or maybe everyone on the net was a hooligan of some sort), most machines ran every service they could support. Even rexd.

  • True, Napster users suck up a lot of bandwidth, but that is one app. DCC chatting with mIRC is P2P but doesn't consume anywhere near the bandwidth that Napster does.

    There are also plenty of server-based apps that use as many resources, if not more; streaming audio webcasts of radio shows, for example.

    My point was that P2P is just another connection... what happens once two machines are connected, who knows. It can't be said that P2P apps inherently increase the cost of anything.

  • A comment from the land where the 56kbps modem still rules supreme (Australia). The ISP I use has a very straightforward pricing plan: you pay a flat fee per calendar month, and you get up to 400MB total traffic in that month. If you go over that, you pay a certain amount per megabyte. This penalises people who download lots of stuff, and particularly penalises those who regard ISP bandwidth as a limitless resource (which, in Australia, it isn't). So far I've downloaded mail for two different email accounts, read about half a dozen newsgroups (three of which run about 200 messages per day), done a bit of web-surfing (checking out about three online comics, then rambling randomly), and chatted on IRC three nights a week, all without using up my quota. (The sketch after this comment spells out the billing structure.)

    Oh, and the reason we haven't got ADSL here is mainly because Tel$tra, gods bless their hearts, have decided not to roll it out - after all, they're making so much money from the phone lines, why provide us plebs with something more effective and more expensive.

    Meg Thornton.
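
    That pricing plan as a small Python function (the structure -- flat fee, 400MB quota, per-megabyte overage -- is from the comment; the dollar figures are invented):

        # Flat fee + quota + per-megabyte overage (sketch, made-up prices).
        def monthly_bill(mb_used, flat_fee=25.00, quota_mb=400, per_mb=0.15):
            overage = max(0, mb_used - quota_mb)
            return flat_fee + overage * per_mb

        print(monthly_bill(350))    # light month: flat fee only -> 25.0
        print(monthly_bill(1400))   # heavy downloader: 25 + 1000 * 0.15 -> 175.0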
  • Running your own DNS and all that is not a problem, DHCP or not. Use an offsite DNS provider.
  • by dachshund ( 300733 ) on Tuesday February 27, 2001 @05:08AM (#399085)
    I dont see how me having some sort of P2P running off my DSL line is any different than me being on IRC with files offered

    That's exactly what P2P is. And ISPs don't like it. They want you to take your files off of the pretty web pages, many of which (in their master plan) will have local caching servers a la Akamai. If you look at the TOS for most cable-modem and residential DSL providers, they specifically say "don't operate a server or file sharing program." The unpredictable downstream and upstream bandwidth that P2P generates might eventually require them to spend more money.

    This is really too bad for them. As long as people have these relatively powerful machines hooked up to the net, it's inevitable that they're going to use them for more than one-way downloads and web surfing. ISPs will adjust.

  • ...At least when P2P means distributed data storage. My logic is simple if you consider these two examples:

    1) Everyone has a high downstream:upstream ratio.
    As a logical consequence, data is stored centrally on large servers with large backbones. Everyone downloads from one or several major sites; upstream is used pretty much just for packet ACKs. The major servers need large amounts of long-distance bandwidth, as do the ISPs the clients are using. This is the current state of affairs.

    2) Everyone has lots of upstream (perhaps 1:1) and everyone has P2P software.
    This means there is a good chance a local peer will have a copy of the data you want. The transfer would then take place over a small number of hops (perhaps they're even on the same ISP!). Long-distance bandwidth is thus reduced, as most transfers can be handled internally to the ISP.

    The question is: which of examples 1 and 2 is cheaper? Long-distance links are expensive, and example 1 needs lots of bandwidth over long distances. Example 2 may require more bandwidth in total, but this is over short distances.

    Well, you might say that "obviously less total bandwidth is cheaper." But think about the typical home network setup: 100Mbit internal (local), 56kbit external (long distance). It's cheap because the high-bandwidth carrying is short-distance, and you'll be downloading once and copying between local computers.

    My argument is that example 2 scales all the way up to global, and the combination of P2P and greater upstream will in fact lower the cost to an ISP eventually (the toy model below illustrates the tradeoff).
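
    A toy cost model of the two examples (every figure below is invented for illustration, including the local-hit rate and the duplication overhead):

        # Example 1 vs example 2, priced per gigabyte moved (sketch).
        demand_gb = 1000.0          # data the ISP's users fetch per month
        long_haul_per_gb = 0.50     # $/GB over expensive long-distance links
        local_per_gb = 0.02         # $/GB staying inside the ISP's network

        # Example 1: everything comes from distant central servers.
        cost_central = demand_gb * long_haul_per_gb

        # Example 2: P2P with real upstream; assume 70% of fetches find a
        # local peer, at the price of 20% more total bytes moved.
        local_hit = 0.70
        cost_p2p = demand_gb * 1.2 * (local_hit * local_per_gb +
                                      (1 - local_hit) * long_haul_per_gb)

        print(cost_central, cost_p2p)   # 500.0 vs 196.8

    More total bandwidth, less total cost, precisely because the expensive part is distance, not bytes.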

  • First off, I keep hearing P2P referred to as a new technology?!? This is not new. P2P applications have been around for a long, long time.

    That aside, I don't really see how a growth in the number and usage of P2P apps will cause any substantial growth in the cost to ISPs. Greater usage of these types of applications does not indicate an inherent rise in the number of people online at any given time, or in the amount of time they spend there.

    It's really just a question of where they point the bytes, to a web server, or to a desktop in a barn in Iowa. An ISP is really nothing more than a bit courier. It really makes no difference to them where the data goes to and comes from. They just care that you fork over your twenty bucks a month.

    There is also the growing shift to direct broadband providers to consider. With this type of connection the users are always connected so time spent online is not really an issue.

    Well, that's my two cents.

  • One thing I've noticed after over a year of lurking on /., and only recently starting to post regularly, is that readers here tend to be inclined to buy expensive toys. So why is it suddenly a concern if Internet access (particularly broadband) becomes more expensive? Any time a 'Neat New Toy' comes around, it seems the only mention it gets on Slashdot is if it's something expensive and/or hard to find. As a budget-conscious (read: damn poor) geek, the only thing mentioned on Slashdot lately in my price range is the Dreamcast hardware going to bargain-basement levels. And I bought one of those right quick.

    To see an article like this, showing concern about the already low price of Internet access seems kind of like whining. I shell out $20 a month for less than a thirtieth of the bandwidth an average DSL or cable user gets, plus the expense of the dedicated phoneline, another $20. For my $40, I get busy signals, dropped lines, and 3.4K downloads when I'm lucky.

    Prices will rise, that's a fact right now. There are more users, spending more time online, that's for certain. But anyone trying to make a huge deal about it just hasn't thought about the economics of it all.

  • Two words: quantity and accessibility.

    How many megabytes of files do you transfer when you browse the web? The kind of files available on the Gnutella network can average 400MB each.

    Also, do you think Joe Bloggs regularly goes to IRC to swap files? Course not; it requires a bit of talent. Try using BearShare (www.bearshare.com) and see how easy it is for a newbie.

  • Oh sure, I can easily see the next lot of set-top boxes running mailer daemons, but there are two problems:

    1. Where does the namespace come from? I'm willing to bet we once again end up with the IPv6 model and go geographic. This means you end up with a street address as a domain name. No biggie; it'll probably happen once the judges make sure no domain names are ever recycled again anyway. And of course, doing this requires one hell of a DNS server hierarchy. You'd probably find it'd have to be the same dumb hierarchy again.
    2. Actually, I've forgotten the second problem:)
  • It seems to me that ISPs are becoming redundant. They have an almost 19th-century business model, one based on dominating a segment of the market. Well, that kind of thinking doesn't work on the internet, and it is the reason why the biggies - AOL for example - will eventually collapse.

    The model is actually a few thousand years older than that, and is just as effective today as ever. And, sorry to say, AOL is not anywhere near "collapse" . . . like it or not, they're still fantastically successful.

    I foresee that when broadband comes, the telecoms companies will be in the most powerful position.

    They already are. The ISPs are just (necessary) middlemen.

    With broadband and more powerful computers, there is no reason why the average user's computer cannot connect to the internet directly, and bypass the ISP model entirely.

    Mail servers, DNS servers, address blocks, and about 400 other reasons come to mind.

    ISPs, with their ridiculous dreams of providing direct media content and products, will die and be replaced by a devolved and more democratic network.

    On one side, there are a lot of ISPs that offer low rates and no content. On the other side, a lot of people want the content offered by companies such as AOL.

    Communicative equality in America, land of free and equal speech, could be reborn through the arrival of P2P, IMHO.

    Nice sentiment, I suppose, but completely unrealistic. Not to mention that the P2P concept has nothing to do with Internet connection service.

  • by Edge ( 640 ) on Tuesday February 27, 2001 @05:39AM (#399092)
    People are missing the point here. So P2P will cause a surge in bandwidth usage and connect times? Look at the Internet. It is evolving. Look at bandwidth. Just as any other technology evolves and expands, so does the amount of bandwidth available. Five years ago, how many people had 512Kbps(+) pipes coming into their bedrooms? Not many. Now, cable modem and DSL connections are becoming the norm. Analog connections are certainly still the majority, but look around at the advertisements. Bandwidth is big business. People are demanding more of it. Usage has already been increasing radically. The number of people getting on the 'net is increasing at a phenomenal pace. Just like anything else, the more you sell, the cheaper it gets. The price per amount of bandwidth has plummeted over the years. Consumer costs are not going to go up. With usage and volume ever increasing, the worst that could happen is that prices may not fall as radically as they have in the past. Or, you may see less of the $7 Internet access offers. BFD. People will still be willing to pay their $50 a month for their cable modems. The majority of people out there have jobs and are not savvy enough to schedule downloads all day while they're at work. It's pretty safe to assume that your grandma will not be on Gnutella downloading the latest ICP track. But she'll still like that cable modem because it makes her marthastewart.com pop up quicker.

    It'll all be okay. Really.
  • It's not just the bandwidth, but also the modem banks that are affected. Before, that ISP with 500 customers could get away with only needing 100 modems, since on average 50-60 of those customers would be on (with a little buffer for peak times). If now 300 are on all the time, the ISP needs three times as many modems to support that many users just dialing in. I know that the ISP I did a little work at really hated the idea that any of their customers might get a busy signal when they tried to connect.

    But I think this means that ISPs will need to change their business models. If a higher percentage of their customers will be using the service at any given time, they will have to support that -- or their customers will go to someone who can. This probably means higher costs, but the idea that only 10% of the customers will be on may be a thing of the past. Service providers change to meet the customers' needs; customers don't change to meet the provider's needs. (The Erlang sketch below shows how fast the modem pool grows.)
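    Modem pools are classically sized with the Erlang B blocking formula; here is a sketch of how the shift from 10% to 60% concurrency blows up the pool (the 500-user figures are from this thread, the 1% blocking target and one-line-per-user load model are assumptions):

        # Sizing a modem pool with the standard Erlang B recursion (sketch).
        def erlang_b(offered_load, lines):
            """Probability a caller finds all `lines` modems busy."""
            b = 1.0
            for m in range(1, lines + 1):
                b = offered_load * b / (m + offered_load * b)
            return b

        def modems_needed(users, fraction_online, max_blocking=0.01):
            load = users * fraction_online   # crude: each online user holds one line
            n = 1
            while erlang_b(load, n) > max_blocking:
                n += 1
            return n

        print(modems_needed(500, 0.10))   # old assumption: about 64 modems
        print(modems_needed(500, 0.60))   # always-on habits: roughly 5x as many
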

  • P2P reduces the bandwidth that any one site must have available. If Napster had to serve all those MP3s it'd never have gotten big. But with P2P, each user serves up only a few songs at once, while downloading for themselves.

    P2P makes sense, but only in a case where there are many things being downloaded and any given user will have some of them.

    It wouldn't work for the LoTR trailer because everyone wanted to get it, and nobody had it. But if you applied it to movie trailers in general, then it would start to work.

    The problem with it is that it relies on the users having content that other people want. This is okay in small communities, but if you didn't recognize any MP3s on someone's drive, you probably wouldn't bother downloading any. The fact that you're trading a well-known group of songs makes people more willing to trust in the worth of the content.
  • The idea of P2P has been around a long time -- the whole Internet is P2P. These recent apps have just made it easier for common users to run file servers on their machines.

    Some advantages of P2P:
    • Decentralized -- no single point of failure (in true P2P, things like Napster have a central server that can fail, but Gnutella doesn't)
    • Distributed -- don't have lots of people all hitting the same resource at the same time
    • More information -- the contributions of each person are added to the network, not just some central server
    • More storage space -- probably Exabytes or Petabytes of storage space available on a large P2P network. Try getting that in one box :)

    And there's probably more.

    P2P is a good thing, and we've been using it forever (well, in Internet time :). These new apps are just a way to make it easier to run a server on your computer and to browse others'. They make it so the Internet is being used as it was intended to be used.
  • If there's a reason for a price increase, I'm sure that ISPs will find a way to make the hike 'justified'.
  • We're already seeing a flavor of "p2p" in all this distributed computing stuff (Seti@home, Distributed.net, Folding@home), as well as instant messaging. Of course it wasn't called "p2p" until the p2p hype hit (usually when hype about some amazing new paradigm hits, everything on earth is recast under that paradigm...hell, Slashdot is p2p!). The legitimate uses so far have been harnessing idle computer resources, communications, and storing stuff other people don't want you to store (the very nature of which makes such a system difficult to use). But if I inhale on the P2P pipe, I can envision many different uses (many of which were promised by "agents").
  • Actually, in most places on this planet you pay gas tax, which usually goes right in to the transportation department for road maintenance. So the more gas you use the more you pay to maintain roads.
  • Three reasons why this won't happen:

    1. Cost of backbone connectivity per user keeps getting cheaper (all that OC192/768 equipment on DWDM fiber makes adding capacity way cheap)

    2. It's way too expensive to measure & bill this stuff. Maybe you can measure it - but just try to link it to a biller. Major pain in the butt - and for what? A few bucks? Not worth the investment.

    3. P2P is a killer app for DSL. Why do you think Speakeasy was giving away Rios? It doesn't make sense to go after a leading reason people are buying your service in the first place...

  • Don't go giving any credibility to that shit-rag fish-wrapper wannabe of a trade magazine. They do not engage in any kind of responsible reporting. They only want to stir the pot and provoke responses from anyone, not caring if the response is positive or negative. Their objective is to generate traffic for the number crunchers. "Look, we had X emails last month from our readers!", not taking into account that 90% of those messages were flames for their crappy reporting.
    You have to remember this is an IDG company; they are in the marketing business: pro-spam, pro-opt-out, anti-privacy, sensationalist bastards from my point of view. I have not renewed my subscription and will find other reading material for the crapper.
  • Over my cable modem connection I can currently download at a max of two megabytes per second (yes, I just said MegaBytes, I am very VERY happy with my connection speed!) but I can only upload at a speed of 15 kilobytes per second.

    Now then, the upload cap is actually doing its job of keeping me from sucking massively large amounts of bandwidth from the local cable modem network, so I really can't complain about it, seeing as how it was designed to do something and it is doing that. In reality it is only an inconvenience to people who are trying to download from the P2P server that I am running (definitely not Napster or Gnutella, to say the least).

    I can download an easy 10 gigabytes a week, which happens to be A LOT of traffic, heh :) I've actually heard of some ISPs inserting a 2 gigabyte per month cap, a cap I could easily blow through in a day. But in general, downstream traffic on a cable modem connection is not really a problem for the ISP or the user. Since I am mostly an off-hours surfer anyway, I am not using up all that much bandwidth that would normally go to other users, and when other users want that bandwidth, it is going to be used. That is the interesting part of not having a downstream bandwidth cap.

    Mathematically, assuming that the tasks being performed are enough to use up all the available bandwidth, the same amount of bandwidth is going to be used whether one particular user is sucking up gobs of it or not. You see, the bandwidth is GOING to be used, so there is no reason to put a cap on it; since it's used either way, you might as well make the customer happy. (Yeah, I know there are some holes in that argument when you only have a few users on, but trust me, everybody in their right mind wants a cable modem if they can't get a T1/T3/DSL, so there is not a problem with the user numbers.)

    Well... upstream bandwidth doesn't work exactly the same way. If you have BOTH upstream and downstream bandwidth uncapped, all of a sudden you can have a few users eating a lot of upstream bandwidth and taking it away from the poor downstream people. The way most services work (to the demands of the customers, I might add), you are guaranteed so much upstream AND downstream bandwidth. In fact I am promised that I *WILL GET* that 15KB/s; it is a reserved amount assigned to me no matter what. If all hell breaks loose and the network is flooded tomorrow, I am still promised the 15KB/s. If you uncap that bandwidth and make it unlimited, all of a sudden you can no longer promise that the bandwidth will be there, since the upstream bandwidth will then have to compete with the downstream bandwidth (granted, it does so right now, but in such a controlled manner that they can almost be considered separate), and I can no longer be promised either the upstream OR the downstream bandwidth that I am currently guaranteed.

    Just sharing the bandwidth out still presents a problem. Right now the maximum number of real-life doable connections on a cable modem is about 4 (depending on your area; some people get 50KB/s on their cable modem from their ISP, but that is rare, yet horribly fun :). Anything more than that and you are splitting your bandwidth too many ways. Now then, 4 connections at a sum total of 15KB/s max is not exactly stressing the network any. But allow 16 connections at a higher theoretical max and all of a sudden you will have more people setting up servers. Even though the bandwidth is shared equally amongst all of the connections, you now have MORE CONNECTIONS. That is what bandwidth caps are REALLY there for. They either discourage people from running servers, or they encourage people to run a server with a very low maximum user limit.

    Now then, you could lift the bandwidth cap and place a connection limit cap instead. And in fact that WOULD theoretically work, and I have seen ISPs that do that. With a known number of users X you can still easily calculate mathematically the maximum bandwidth each user is going to be using (BandwidthTotal/8X). But taking a second look at this, it, well, sucks.

    You see, I can typically have 16 or more windows open at the same time loading different web pages. Put this in conjunction with the two or three files I can be downloading at an excess of 300KB/s, and granted the increased upload speed would be nice, but shit, I would be reaching my connection limit within minutes of turning on my computer!

    What is more, in any sort of large game that is not server-centric, I might have to have 16 connections "established" to other users. These will not be active connections, but they will have to stay open at all times to be ready to receive bursts of data.

    Not to mention that Windows 9x/2000 has a horrible habit of leaving a dozen or so connections open long after you have closed their originating application or browser window. Not a very good thing; users would find themselves having to reset all the time, just to get their internet connection open!

    Now then, what would work? After all, I do want more upstream bandwidth, but I DO NOT want unlimited upstream bandwidth for myself, because I know that since everybody else would get it too, the network would become an absolute piece of shit.

    I would prefer to be able to *PAY* a few extra dollars a month to have my bandwidth cap increased. While this would still act as an excellent deterrent to having users run their own servers, it would still allow some users to run their own responsible servers (a 50KB/s or 64KB/s bandwidth cap is what I am thinking of) and would also double as a "Gamer's Package" with a definite improvement on internet performance during games.

    How much more would I be willing to pay for that service? I am thinking an extra $4-$5 a month, nothing too much, and very cheap. Remember that it is just a deterrent to some user setting up a server in their spare time and taking up all of the network's bandwidth.
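
    A sketch of that "pay for a bigger cap" idea; the tier names, the channel size, and the proportional-scaling rule are all assumptions:

        # Tiered upstream caps on a shared channel (sketch, figures invented).
        TIER_CAP = {'standard': 15, 'server': 50, 'gamer': 64}   # KB/s per sender

        def upstream_shares(channel_kbs, active_tiers):
            """Each active sender gets its tier's cap; if the caps oversubscribe
            the channel, everyone is scaled back proportionally."""
            caps = [TIER_CAP[t] for t in active_tiers]
            scale = min(1.0, channel_kbs / float(sum(caps)))
            return [c * scale for c in caps]

        # 100 standard users plus 10 paying "server" users exactly fill a
        # 2000 KB/s upstream channel, so every reserved rate is honored.
        print(upstream_shares(2000, ['standard'] * 100 + ['server'] * 10))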
  • by The Man ( 684 ) on Tuesday February 27, 2001 @06:53AM (#399102) Homepage
    The Internet has *always* been a peer-to-peer network, from day one. Only for a brief period of about 5 years in the mid-90s when a large number of non-wealthy new users came on board and were forced to live without bandwidth was this not the case. If you look at the design of the protocols, and the way the internet actually works, it's fairly obvious to me that it was always fundamentally designed to be peer-to-peer. Not in the hip new let's-steal-music way, but in the sense that every machine would function as both a client and a server. Remember when every machine ran telnetd? When there were no firewalls? When "PPP" meant going to the bathroom? This ain't new, folks. But I did like it better when peer-to-peer meant the Internet was a community, not just a place to steal stuff and run DoS attacks on irc servers. The model didn't really break until people started being allowed on the Internet without an Internet-supporting OS. They never joined the community, just babbled senselessly and joined AOL in droves. It's amazing how well their OS reflects their attitude and behaviour - no services provided, but a client for everything. Take, take, take, give nothing. That's peer-to-peer? Fuck this noise.
  • Um I have to disagree with you there...I don't find MP3s at all objectionable. I am not interested in a flame war, I just disagree, I don't think the Napster concept is all that wrong.
  • by Urban Existentialist ( 307726 ) on Tuesday February 27, 2001 @04:39AM (#399104) Homepage
    It seems to me that ISPs are becoming redundant. They have an almost 19th-century business model, one based on dominating a segment of the market. Well, that kind of thinking doesn't work on the internet, and it is the reason why the biggies - AOL for example - will eventually collapse.

    I foresee that when broadband comes, the telecoms companies will be in the most powerful position. They own the fibre that runs into the consumer's home. This gives them the power. ISPs are a temporary phenomenon.

    With broadband and more powerful computers, there is no reason why the average user's computer cannot connect to the internet directly, and bypass the ISP model entirely.

    ISPs, with their ridiculous dreams of providing direct media content and products, will die and be replaced by a devolved and more democratic network.

    It is better that the communications and telecoms companies have the power than the ISPs, for ISPs aim to pump media and ideas into the home, whereas telecoms companies are only interested in the almighty dollar.

    Communicative equality in America, land of free and equal speech, could be reborn through the arrival of P2P, IMHO.

    You know exactly what to do-
    Your kiss, your fingers on my thigh-

  • All ISPs should be interested in running their own set of "P2P servers" themselves and they'll come out ahead without needing to raise prices.

    A perfect example is Mojo Nation [mojonation.net]: if an ISP runs several boxes with mojo brokers covering a large or full part of the mojo data address space, then both the ISP and its users will benefit. Users' mojo brokers will naturally prefer the ISP's mojo brokers because they have much lower latency and higher reliability compared to brokers elsewhere on the net (outside of the ISP's internal networks). That means less overall outside/internet bandwidth usage directly by the users, due to the caching the ISP's brokers effectively perform. It's a very large content-neutral distributed caching system. In the end it saves the ISP money!

    People misunderstand "P2P" in its buzzword hype. ISPs are Peers too!
  • Have you heard of a cache?
  • Here in Europe, most people still have to pay by time spent online -- we pay for what we use.

    Some ISPs have imposed maximum session times (you get disconnected after an hour and have to redial; annoying!)
    The people who are online are downloading much more data than a few years ago, and this is clogging up the bandwidth. You can really notice this. Friends of mine jam up AOL lines all day downloading MP3's -- it's not fair to other users and they are getting more than their money's worth.

    However, with DSL and cable, things don't work the same.

    Charging by the meg would solve the problem (and create lots more) for permanent connections, rather than flat charges. But for dial-up this would not work very well, since a double charge could be imposed (you'd pay by the minute for the call and by the megabyte on top).

    In my opinion, access charges should be high for "unlimited" plans, while they could be based upon data transfer with lower monthly charges.

  • I read the article, and all I see is the quote that P2P "will break the ISP business model". I don't see how me having some sort of P2P running off my DSL line is any different than me being on IRC with files offered, or surfing, or anything else.
  • The rate that bandwidth is expanding at right now is phenomenal. The case in this part of the world is such that there are people walking around with fibre licenses with no idea what to do with them. The Southern Cross pipe opened up and that's got a combined size of 80Gig right now, I think 160Gig in a year. At least, that's the old stats.

    P2P isn't anything compared to that stuff. Bring on the holographic conferencing!

  • Uh, there have been second-class non-peer folks on the 'Net for a long, long time. They used to be called "UUCP nodes." When SLIP and then PPP came out, it was a godsend; the folks who weren't at a university or a high-tech company could finally be directly connected to the net (albeit slowly and with high latency). I remember upgrading from UUCP to SLIP and it was the greatest thing ever. They gave me a whole freakin' class C for my 14.4k SLIP connection. I could play MUDs! I guess that actually made it disastrous from a getting-anything-done point of view.

    Burris

  • This should come as no surprise to ISPs, as before its consumerisation the Internet was a p2p system, exchanging information between peers.
  • 1. Cost of backbone connectivity per user keeps getting cheaper (all that OC192/768 equipment on DWDM fiber makes adding capacity way cheap)
    Sure, bandwidth is getting cheaper, but as it gets cheaper and people's connections get faster, they want to transfer richer media. First it was just JPGs and GIFs. Now it's MP3s, and DivX is starting up. Next it'll be higher res, less compression. Maybe after that it'll be teledildonics or something. Oh, and don't forget there are many decades of video history that have yet to be ripped and shared.

    It's just like when you buy a new hard drive. Every time I've gotten a new disk in the last 10 years I've thought "damn, this drive is so huge I'll never fill it up," but I eventually do, sooner rather than later.

    Disk space, CPU, and RAM have been increasing at the same ridiculous rates as bandwidth, but we manage to fill them up. A 700MHz G4 is many, many orders of magnitude faster than the 68000 in the first Mac, but Mac OS X on a G4 feels slower than an old OS running on the old Mac. That's because software engineers do new, more complex things with the increased available power and memory. Users do new, more bandwidth-intensive things given increased bandwidth.

    The complementary law to Moore's Law and its memory/bandwidth equivalents is that usage increases at the same rate. The idea that after a couple more upgrades of the backbone we'll have plenty of excess bandwidth for every user to do whatever they want at a low flat rate is just naive.

    What's going to happen is that ISPs who structured their pricing around users with intermittent, spiky bandwidth usage are going to keep their bandwidth offerings and prices the same for longer than they would have without people using P2P apps, until their profits are back. Basically they will raise their prices without actually raising the prices consumers pay, just like the way consumer-product manufacturers raise prices by giving you less product in the same sized and priced container.

    Burris

  • by qpt ( 319020 ) on Tuesday February 27, 2001 @04:41AM (#399113)
    On the one hand, it seems hardly a matter of debate that if people use more bandwidth, they are going to have to start paying more. It is a well accepted economic principle that a large quantity of a good or service will cost more than a small quantity.

    On the other hand, bandwidth is an odd sort of resource. Unlike coal, iron or wheat, it is wholly manmade. While it is correct to point out that the physical devices such as routers and switches are in fact made of natural resources, this is not the limiting factor of their production.

    Rather, human ability and ingenuity are required for the manufacture of IP networks. To build larger networks, effort must be taken from some other task and applied to their construction.

    How does this relate to ISP prices? Quite simply, the opportunity cost of network construction goes down as demand for the service rises. Thus, even though more net bandwidth will be used, the total cost will actually be less than it is now. The tradeoff, though, is that some other good or service will lose manufacturing precedence and increase in price. It is anyone's guess what this good or service may be.

    I suppose we could always hope it is Microsoft :)

    - qpt
  • There are clearly higher costs associated with bigger usage. The reason the initial growth in web usage did not cause cost inflation is that the associated growth of users brought economies of scale.

    There are two separate economic situations here. In the first, usage increased, but vastly increased scale and improved technology brought economies of scale.

    In the second case, however, growth is much shallower, and even if there is massive growth, the large size of the existing market means there is less room for economies.

    In this case, where it is a change in usage patterns more than user numbers, there are certainly implications for ISP costs.

    The cost increases are still, however, likely to be small, simply through competition and improved technology.
    --
  • Then the ISPs will be liable in the same way that Napster and my.mp3.com were liable. I'm sure they'd love that. "Content-neutral" won't save anyone's ass, especially not from overly-litigious conglomerates.

  • This is all just wrong. The net should be getting cheaper, and faster. Technology keeps moving forward, able to move more data faster all the time. If the telcos, ISPs, etc. would just take advantage of what they already have, we would be in much better shape. 1. A 1.5Mbps SDSL line should be standard equipment if you're within reach of the connection. 2. We hear all the time about all the dark fiber in the information corridor; light that fiber up. 3. Stop installing old, outdated equipment; only install the stuff that's capable of providing the latest advances.
  • Exactly. Well said.

    A couple of grumpy observations, because it's only Tuesday and the Man seems to require another four days of our spirits before we're released for the weekend parole:

    1) Competition is drying up here in the midwest, the Ma & Pa's a thing of the past and the utilities grabbing the lion's share of business. Not surprisingly, service (through Qwest, anyway) gets suckier all the time -- service outages, unknowledgeable undertrained underpaid tech support, and ever more unctuous corporate-speak about how good it's all supposed to be. Yes, good for the nascent monopolies.

    2) The Net is truly a test of democracy, as any number of cases and legislation in recent years can attest. One of the key tenets of democratic society is that one must co-exist peaceably with those one can't stand, and the extent to which one does this successfully is a measure of that individual's suitability for citizenship (on one extreme is the criminal, having failed to get along, and on the other is the politician taking graft from everybody, having succeeded in a kind of polymorphously perverse ideal of tolerance). Ok, that said, screw tolerance. I hate the idea that a bunch of #$@%! suburbanites sitting around on their fat asses trading Britney Spears songs via P2P will increase my bill and bog down the Net. Bloody sheep! Television was invented to keep you busy, back to your idiot boxes. ;)
  • My elderly mother pays hundreds of dollars each quarter for the privilege of being able to flush her shit into the local water treatment plant. It never makes headlines, though.
  • There is no way this is going to work. Projections now say that 55% of online users will still be using modems in 2004. That pretty much eliminates your claim that the 'average user's computer' can be online without an ISP. You also fail to realize that even broadband providers are ISPs. Anyone who sells you access to the Internet is an ISP. Whether they're also a phone or cable provider is beside the point. As long as there's money to be spent on infrastructure and money to be made by selling access, it's going to be a corporate deal.

    I'm really surprised this didn't get modded as a troll. But it's early. :)

  • Charging by the Mbyte (I believe that the KPacket was the actual unit) is how data services (such as X.25 networks) used to be charged.
  • Actually, on a European scale, BT is by _far_ the most expensive, except (I _think_) for Germany. They had a graph on BBC News last time Oftel started investigations, and the BT call-charge bar was towering above most European countries'.

    BT also have lovely practices such as selling your ISDN line to someone else if they run out and you aren't using it. They won't mention this, keep charging you, then claim it was just disconnected and will be reconnected in a few days - while they find some other poor company to steal from.

    Then there's ADSL. Not only is it expensive for an unroutable NAT IP address ($75 per month), but you have to use a USB ADSL box, or pay business rates if you want an Ethernet one. Of course this more or less dooms you to Windows.

    And all this before we even look at installation charges, DISCONNECTION charges, high monthly line rental in addition to costly local and even more costly national calls, and the fact that BT are evil anyway (remember the Hyperlinks patent?)

    Hey - that's what happens if you privatise an originally state-owned utility... it turns them directly into a monopoly that got a free infrastructure.


    --
  • Which is what the original ISPs used to do. The provision of additional content/value came later.
  • People have a reason to use the net more, ergo ISPs need to charge more. That's called not estimating consumer demand correctly and then playing catch-up. It's called we-the-consumer don't need your worthless fucking 'portals' anymore, so now you have to charge us to cover the pass-through advertising rates that dried up. Let's not wrap this in some grand sociological shroud.
  • The ISPs, whether the big boyz or the mom-and-pop shops (who have to go through the big boyz' wire, cable or fibre), are starting to count the number of packets they send to your IP address.

    They don't need to do more than count and log. Hit the limit and you'll get a message saying you're cut off. I already have.

    Got a 24kbps modem, you might never hit it. Got DSL, you will probably hit it. The more you surf, the faster you'll get cut off.

    Belong to an active newsgroup, or retrieve a lot of files (a typical Smalltalk image is 100MB where I work; that's a lot of MP3s), and you could find yourself cut off from the world real fast.

    Unless you pay (and pay [and pay {and pay}]) for plans with different limits you'll be sucking on a dry pipe.
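
    The count-and-log scheme is simple enough to sketch; the quota, the counters, and the cut-off action here are all hypothetical:

        # Count bytes per customer, cut off at a monthly quota (sketch).
        LIMIT_BYTES = 2 * 1024 ** 3        # hypothetical 2GB monthly cap

        usage = {}                         # bytes seen per customer IP this month

        def account(ip, packet_bytes):
            usage[ip] = usage.get(ip, 0) + packet_bytes
            if usage[ip] > LIMIT_BYTES:
                cut_off(ip)

        def cut_off(ip):
            # Hypothetical enforcement: null-route the address, send the notice.
            print('disconnecting %s: monthly quota exceeded' % ip)

    Nothing fancier than a dictionary and a threshold, which is exactly why ISPs can deploy it quietly.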
  • i have DSL (Earthlink) in NYC and i have a dedicated phoneline for it (three phonelines total: one for private voice-only, one for biz fax and voice, one for deezul). Verizon is the only local phoneline service provider in town and they suck, but they are not very expensive: my two 'additional lines' cost me a total of exactly $12.22 per month added to my monthly phoneservice charge of $26.95... Earthlink DSL costs me $34.95 per month... my machines (three) are NAT'ed and up 24/7, damn near 365... i am a bandwidth-sucking fiend AND i have never hit any cutoff point. ain't gonna happen. further, the only time i hit any bottlenecks online is when a site gets slashdotted. nyah! the very idea of per-packet charges is preposterous. fsck that noise.
  • P2P reduces the bandwidth that any one site must have available.
    This is true in a way. However, it must be qualified. First, does it _really_ matter? Yes, bandwidth costs money, but if it is properly located and the scale is sufficiently large (which Napster et al. definitely are), it is quite cheap. The bandwidth demand also scales with consumer demand, hence there are generally ways of getting corresponding revenue to pay those costs (i.e., banner ads, nominal fees, various marketing, promotion, etc.). Witness the success of services like Tucows, Cdrom, and various mirroring services. They serve out gigabytes per day of relatively obscure stuff; certainly the same could be done with the more condensed demand for popular music (not just "pop", but anything popular enough to be signed to a label and listened to, as opposed to the thousands of random shareware programs and such). Second, Napster's databases are not all that cheap; they are an additional cost that cannot be ignored. Third, the cost of bandwidth is just one element among many. One of the biggest problems with Napster, and especially with more P2P-oriented services, is that the searching process can be awfully cumbersome: finding the optimal server is often tough, finding quality MP3s is tough, and there are many other similar issues. If piracy were legitimate, centralized servers could easily solve all of these concerns.

    If Napster had to serve all those MP3s it'd never have gotten big.
    Then how do you explain the success of centralized techniques that serve up equally large files? If Napster is such an efficient model, why is it that this model is pretty much exclusively confined to piracy? From an economic standpoint, you would fully expect the legit methods to crowd out the so-called P2P technology. It's not as if Napster's technology is that tough to master.

    P2P makes sense, but only in a case where there are many things being downloaded and any given user will have some of them.
    I really don't think it does, with current applications at least, except in the case of this kind of piracy. Though I've covered this argument above, let me add one other point. Napster is not about cheaper costs in the aggregate; the only way you _might_ say the costs are cheaper is if you count the offloading of costs onto third parties as costing nothing. In other words, Napster benefits because a lot of the sharing individuals are on high-bandwidth connections and do not have to pay an amount commensurate with the bandwidth they consume. But this does not mean in any way that it is more efficient, just that at the present time, the way things are currently configured, it sorta works. The load on the inter- and intra-nets is increased relative to highly centralized methods, particularly relative to smart mirroring, hierarchical distribution, and Akamai-like technologies. Rather than downloading one or two hops off of MAE-East (in the highly centralized example), you pull from some schmoe's college dorm, which could be anywhere in the United States (or worse, the world). That is many more hops, less direct, and does not benefit from economies of scale. It also depends on the "good will" (if you could call it that) of the file sharer, which I believe might well rest on ignorance, a certain temporary sense of "community" in the piracy community, and other similar issues.

    I'd also like to add that Napster, ethics and law aside, has the benefit of ignoring the issue of value creation and payment, because they're simply not responsible for having to collect or generate revenues. In other words, it's easy not to have banner ads, forms, or what have you, not just because of the different structure of Napster's overall bandwidth cost, but because they're not generating these songs that people find of value, nor are they supporting the generation of this value. In essence, it's an artificial, and ultimately temporary, situation. Various parties are paying for Napster's actual costs, and a lot of that has nothing to do with economy of bandwidth and/or storage at all. If Napster were put in the position of having to actually create value, some form of actual revenue collection would be a given. Once that is in place, it is hard to argue that centralized servers' tacking on of nominal bandwidth costs is much of an issue, especially when taken in light of the benefits that more traditional client-server technology is capable of bestowing on the user.

  • For a while, I thought that peer-to-peer was good. But then I got a CD burner, and some of my friends wanted to trade MP3s. I figured "why not?" and we did. The speed of the whole thing was amazing.

    I've started to do this over mail, and now have some 45 gigs of MP3s. A modest collection, considering some people's 350+ gig collections. (I'd love to see the RIAA make them pay the $250,000 per pirated album.)

    Karma...Police...
    Arrest this man...
    He speaks in numbers,
    He buzzes like a fridge..

  • I don't get it either.
    Increases in usage should suggest to ISPs that an investment in new infrastructure is a prudent move. After all, the market just keeps getting stronger. More use should bring a reduction in prices.
    Shouldn't they be more worried about a decline in usage?
    People might move files around? I thought that was what the Internet was for. Hard to browse the WWW without some files moving around.

  • My telco is my ISP, specifically BellSouth FastAccess DSL.

    I went with BellSouth precisely because they own the wire. Sure, Telocity may have better customer service, but who is Telocity gonna call when the line goes bad? Riiiiight.

    And in all honesty, I'm pleased with the service. I ordered the self-install kit and got it running with minimal trouble. I received the kit about 10 days after placing the order, and I was online about 4 hours after receiving the kit. This included making it work with my home network even though BellSouth specifically doesn't support home networking. I just went through the normal install for Internet Connection Sharing (yeah, I'm using Win98, sue me).

    Recently I had a problem with my regular phone connection -- no dial tone. (Oddly, the DSL continued to work throughout this episode.) This required BellSouth to send out a tech, and replace the line running from the pole to my house. He arrived on time, worked like a dog, kept us informed of everything he found and did, and corrected the problem. Telcos are not necessarily evil, though the procedure for putting in the service call was pretty anonymous and bureaucratic. I groaned when they said "we'll check it within 36 hours," but then they actually did.

  • Where I am the telcos already run the show. They are the tier one and two providers. They've bought or started the largest ISPs already, and there's no way the independents could ever compete on cost.

    Where I am (Houston) things were the same way, until my current ISP appeared. This independent ISP has elbowed its way into the market, competing with the big telcos and then some. It is increasingly successful and expanding into new cities. Independent ISPs can still succeed if they have the right business plan and offer the service that people want.

  • My DSL shares happily with my voice fones. You do have to put the filters on all your voice equipment as some (but not all) of them will interfere with the DSL. I have not noticed any bandwidth hit when the voice fone is in use and I regularly see the 1.5Mbps down / 200Kbps up I was promised when I signed up.
  • While it might be different in other places, in my town we are suffering from the third telco to lay fibre along the same route in the last 6 months. With the current one, the same contractor is digging up the same roads and paths which they themselves dug up for the previous telco less than a month before. How much money could be saved if, for the "popular routes", just one trench was dug and all of the telcos laid their fibre in it?
  • As long as AOL is brainless to use and install, my mother, sister, and brother will remain customers. Easy to use email and browsing. Easy to access forums and 'sites'.

    More than enough for most people and still a bargain at $21 a month, or whatever they pay.

  • Emphasis mine:

    The idea that after a couple more upgrades of the backbone we'll have plenty of excess bandwidth for every user to do whatever they want at a low flat rate is just naive.

    The key word is "low." Customers always prefer flat rates. You're right in your subsequent comment, that ISPs will charge slightly higher fees due to the increased traffic per user - but to suggest that the ISP biz will suddenly move to usage-based pricing, against all trends to the contrary, is incorrect.

    People have been suggesting this for years, usually in paranoid terms. Now as before, we have nothing to worry about.

  • ...with broadband is that the telcos have been moving the goalposts since they've worked out that (shock horror) a broadband customer might actually use quite a chunk of bandwidth. One provider now has a policy of kicking off any customers using 10 times the AVERAGE amount.

    This, of course, means that the average amount drops each billing period. Mathematically cute, but it's certainly stopped me signing up. When I use any P2P services, I do so on a slow (but free) dialup line in my sleep.

    Buckets,

    pompomtom
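
    The ratchet described above is easy to simulate. A sketch with invented usage numbers, just to show the mechanism of a "kick off anyone above 10x average" policy:

```python
# Each period, cut users above 10x the current average, then recompute.
users = [1] * 15 + [3, 5, 20, 60, 200]  # GB/month per customer (invented)

period = 1
while True:
    avg = sum(users) / len(users)
    cut = [u for u in users if u > 10 * avg]
    print(f"period {period}: average {avg:.2f} GB, cutting {len(cut)} users")
    if not cut:
        break
    users = [u for u in users if u <= 10 * avg]
    period += 1
```

    Each round of cuts lowers the average, which lowers the threshold, which sets up the next round of cuts -- exactly the "mathematically cute" problem described above.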
  • by Anonymous Coward
    Hi All,

    What costs an ISP more: a customer transferring X amount of data from a remote web site, or transferring X amount of data from another customer of the same ISP? Answer: the first option makes use of the ISP's upstream connection, so it costs more. Peer-to-peer is Good for ISPs because it means that data is obtained locally.

    Really.

    P.
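
    To put rough numbers on that point, a back-of-the-envelope sketch in Python, where both the transit price and the local switching cost are assumed figures, not real quotes:

```python
# Compare the monthly cost of carrying the same customer traffic over
# paid upstream transit vs. keeping it inside the ISP's own network.
TRANSIT_PRICE_PER_MBIT = 300.0  # $/Mbit/month to a Tier 1 (assumed)
LOCAL_PRICE_PER_MBIT = 5.0      # marginal cost of local switching (assumed)

def monthly_cost(avg_load_mbit, price_per_mbit):
    """Cost of sustaining an average load for one month."""
    return avg_load_mbit * price_per_mbit

load = 1000 * 0.05  # 1000 customers averaging 0.05 Mbit of transfers each
print("via upstream transit:", monthly_cost(load, TRANSIT_PRICE_PER_MBIT))  # 15000.0
print("customer to customer:", monthly_cost(load, LOCAL_PRICE_PER_MBIT))    # 250.0
```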
  • by bonoboy ( 98001 ) on Tuesday February 27, 2001 @04:50AM (#399137) Homepage Journal

    Interesting thoughts. But it's a basic anarchy that will never happen. Your model seems to require everyone running their own mail server, for one. Can you see people doing that? Your model also seems to depend on some ideas that IPv6 might deliver. It would have to, because handing everyone their own address blocks is the only way you can sidestep content providers and ISPs. What do you think your plan would do to the routing tables without IPv6? It doesn't make your idea any less valid, but we'd need a strictly hierarchical system. That requires some very decent top-down organisation of the address space. Geographic regulation would have to come into play. Not a big deal, perhaps.

    Bandwidth always costs money. Companies want to get you using their content so that you may never know the rest of the Internet exists. The Walled Garden approach is going to be a big part of the IP networks of the future, believe me. I'm not talking your Mom & Pop ISP here, I'm talking about large fibre networks with cable television services running. This is dedicated content, and has nothing to do with your anarchy model.

    Where I am the telcos already run the show. They are the tier one and two providers. They've bought or started the largest ISPs already, and there's no way the independents could ever compete on cost.

    Anyway, just some thoughts. The point being that it seems a nice comment with very little thought behind what an 'ISP' is these days. Anyone with an upstream connection is an ISP, not just some guys with modem banks.

  • ==
    The model didn't really break until people started being allowed on the Internet without an Internet-supporting OS. They never joined the community, just babbled senselessly and joined AOL in droves. It's amazing how well their OS reflects their attitude and behaviour - no services provided, but a client for everything. Take, take, take, give nothing. That's peer-to-peer? Fuck this noise.
    ==

    This is pathetic and modded up to +5 to boot. I hate windows as much as (or more than) the next guy but the idea that the connection of a particular OS to the internet has adversely affected it from a non-technical standpoint is absurd. What the hell is an "internet supporting OS"? In addition, I didn't realize it was beneficial to the net for every machine to have every possible internet-related daemon/service running by default. Actually, I thought the exact opposite was true...

    maru
  • by Anonymous Coward
    Hehe

    He called you guys Slashers

    Hehe

  • ==
    I foresee that when broadband comes, the telecoms companies will be in the most powerful position. They own the fibre that travels into the consumer's home. This gives them the power. ISPs are a temporary phenomenon.
    ==

    Uh, telcos own the copper that travels into the consumers' home so what is the holdup on this power-play?

    maru
  • Very, very well put!

    Way too many people believe in the Free Lunch concept. The harsh truth is free meals don't last more than a couple of days, if they're offered at all. My ISP is the local telco (a rural telephone cooperative) and is the only one in the area offering DSL service. 2.2 Mbit ADSL was first offered by them for $60/month. The price soon rose to $90/month. About two months ago they explained that feeding their DSL customers alone was costing them 2x what they were bringing in from monthly charges. Rather than further increase the monthly cost, the ISP has chosen to no longer guarantee thruput, as their current customer base is already saturating one OC3 and one T3 circuit. Their monthly cost for those circuits to Sprintlink and Cable & Wireless, as well as their equipment upkeep and tech support, has been too high, even with hundreds of customers.

    The simple fact is, the per-customer cost to them is higher than what they can reasonably charge without sacrificing service. They're not venture capital funded, they're a local telephone cooperative and cannot afford to bleed that much money.

    To answer a FAQ, yes, they do offer lower-thruput DSL services at much lower costs. 384/128 Kbit ADSL is offered for $15/month + equipment rental or purchase.
  • by Kefaa ( 76147 ) on Tuesday February 27, 2001 @04:51AM (#399142)
    In many areas the local monopolies have driven competition right back out the door. I get my DSL through Verizon, the ONLY game in town. While they do have increased costs for the benefit of always-on, or nearly always-on, connections, you will have to forgive me if I do not feel they are losing money on this deal. Aside from the fact that most people shut down their connections, ISPs can (and do) enable software shutdown/wakeup, so a dead connection is truly a zero-resource user.

    The price "scare" is very similar to what the cable companies did a while back. First mention you have to have a large increase [Sorry the price is going up $20/month]. Let everyone get outraged at the ridiculous increases and get the utilities commission involved. Then "relent" to a meager increase of $8. Everyone thinks they win.

    --The problem is not that most people are sheep, it is that most people think they are wolves.
  • You do have to put the filters on all your voice equipment as some (but not all) of them will interfere with the DSL.

    Or do as I did... my POTS line comes into a wiring closet in my home and directly into the DSL "modem". The filtered output from the modem then feeds the rest of the house. No need for a filter on each phone tap.
  • For an always-on connection, are ISP-hosted mail and DNS servers really needed? Granted, an ISP's mail server is convenient for when your system is down (or not connected), but SMTP was designed for, and works perfectly well with, direct host-to-host transfer.
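
    A minimal sketch of that direct host-to-host delivery, assuming Python with the third-party dnspython package for the MX lookup; the addresses are made up:

```python
# Deliver mail straight to the recipient domain's MX host, no ISP relay.
import smtplib
import dns.resolver  # third-party: dnspython

def deliver_direct(sender, recipient, message):
    domain = recipient.split("@")[1]
    # Pick the most-preferred (lowest preference value) MX record.
    answers = dns.resolver.resolve(domain, "MX")
    mx = str(min(answers, key=lambda r: r.preference).exchange).rstrip(".")
    with smtplib.SMTP(mx) as smtp:
        smtp.sendmail(sender, [recipient], message)

deliver_direct("me@my-static-ip.example", "friend@example.org",
               "Subject: hi\r\n\r\nDelivered without the ISP's relay.")
```

    (Worth noting that some consumer ISPs block outbound port 25 precisely to curtail this kind of direct traffic, which is the same tension the rest of the thread is about.)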
  • This idea is very practical and sensible, but how do we convince the web pages to reduce their file sizes? Nothing worse than waiting for a humungo-Flash animation to load unexpectedly.

    Metering will really require greater security controls on browsers.

    ----------------------

  • ...and a commendable observation and opinion-rant.
  • "ISPs might be seeing cost increases.."

    These guys already seem to be in a soup. Here's an example. [founderscamp.com]

    High-speed ISPs nationwide have faced many difficulties competing to provide DSL service against data competitive local exchange carriers (DLECs) and Baby Bells.

  • Well, that kind of thinking doesn't work on the internet, and it is the reason why the biggies - AOL for example - will eventually collapse.

    Which is why AOL hasn't become one of the most significant media forces in recorded history and is currently filing for bankruptcy.

    Not to disagree with the principles behind your opinion, but the facts available suggest otherwise. AOL's marriage with Time Warner allows them to entirely short-circuit your plan. Time Warner owns huge CATV systems, in effect making AOL a telecomms provider AND an ISP at the same time.

    My guess is that the telecomms companies realize that the residual income is really in providing services over the wire, not just the wire itself and once their technology is capable of providing a video-like signal, they too will enter the content-provider business.

    The real threat here is the lazy American consumer who is interested in consuming entertainment, and an entertainment industry who makes their money acting as gatekeeper between the consumers of entertainment and the entertainers. THEY are the ones who dislike bidirectional broadband (makes anyone a producer, eliminates the middle man).
  • The messages exchanged between two different peers in a P2P network are very small, since peers communicate with small control messages.
  • NTL in the UK (cablemodems) seem a bit more reasonable about this - their TOS simply states no web or ftp servers. There's of course a clause saying they may add services to the list ;-) Essentially they just want people to be responsible with the upstream that is there.

    Of course, they're doing other good things too. Not only do they undercut the ADSL travesty we have in the UK, their web page explicitly states that you can use any type of computer with an ethernet connector and DHCP. More recently, in our area cablemodem sales are on hold - because they know they can't sell any more and maintain the service. Sales should resume in a few months when the network is upgraded - why can't other companies be this sensible? (BT happily took more and more subscribers until the service became totally unusable every single weekend.)

    I'm just waiting for the catch to appear - as yet, it hasn't ;-)

    --
  • Ummm nooooo. Aside from some of the other obvious reasons why this wouldn't work, pointed out by other replies to your post, can you just imagine a world where all Internet access is controlled by the telco??? Seriously, who in this country likes their telco? I've never heard of one that gets high marks for customer service. If we let telcos control all of the Internet connection services, then you might kiss P2P goodbye, since no one will ever be able to connect! It's far more convenient for the average Windoze luser to deal with a smaller ISP (who knows their customer is what keeps them in business) and let them deal with the big corporate provider. Come to think of it, I personally would much rather deal with ISPs than the stupid telco.

    Just as an example, I tried to get DSL installed a few months ago from the telco. They gave me huge promises on dates of when it would be installed, yadda yadda. That date came and went, and different departments started blaming each other for the mix-up. I never got any answers and I never got DSL. Finally I simply canceled my order 2 months after the install date. That was nearly 4 months since I'd made the request! Stupid telcos.

    "One World, one Web, one Program" - Microsoft promotional ad

  • This means that there is a good chance a local peer will have a copy of data you want.

    That's true. But at the moment, very few of the major p2p services look too closely at network topology. When this becomes more common, you probably will see ISPs reacting more favorably.
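
    For what it's worth, the topology awareness being asked for doesn't have to be elaborate. A sketch, assuming invented peer hostnames and the default Gnutella port; a real servent would also weight peers by advertised bandwidth:

```python
# Prefer "close" peers by measuring the time to open a TCP connection.
import socket
import time

def rtt(host, port=6346, timeout=2.0):
    """Rough RTT estimate: time a TCP connect to the peer's servent port."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable peers sort last

def pick_closest(peers, n=3):
    """Return the n peers with the lowest connect-time RTT."""
    return sorted(peers, key=rtt)[:n]

candidates = ["peer1.example.net", "peer2.example.net", "peer3.example.net"]
print(pick_closest(candidates))
```

    Peers reached over the ISP's own network will usually win such a comparison, which is exactly the outcome ISPs would favor.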

  • I don't see someone who is paying for 'unlimited network access' who actually uses it 24/7 as 'abusing' their service, or being 'unfair' to others.

    Just because my idea of internet use is different than yours doesn't mean I'm 'abusing' it.

    @home loves doing this. They say it's 'hundreds of times faster than dialup' and 'unlimited' and 'always on', and then they freak out if you use too much bandwidth.

  • Goddamn right.

    I figured that p2p was a truly new paradigm, until I read the specs for gnutella. Lo and behold, you've got a client and a server glued together, and cleverly called a 'servent.'

    Client. Server. Both running on the same machine. THIS is peer-to-peer? Gee, I guess ftp/ftpd was peer-to-peer back before the term existed then! How about UUCP?

    AOL and Canter&Siegel were the death of community on the internet. Anything claiming 'peer' status after that is a lie.

  • Most ISPs have a business model that assumes certain ratios. For example, no ISP would have one dial-up modem (or port) for every dial-up user they service. Modems and lines are expensive, and ISPs assume that dial-up users won't be online 24 hours a day. In fact, many have service agreements that prohibit staying online continuously for that long.

    By the same token, upstream bandwidth is usually just a fraction of the bandwidth they currently have sold to customers. This is because even dedicated high-bandwidth customers don't use all of their bandwidth all of the time. For example, our T1 customers usually only average 200kbps each at any given point in the day. That means we only have to have 1 T1 worth of upstream bandwidth for every 6 downstream T1s sold, and can still provide quality service (a worked version of these numbers is sketched below). If every T1 or DSL customer used all of their bandwidth all of the time, most providers' upstream bandwidth would become congested, and they would have to buy more.

    Generally, bandwidth for an ISP costs more than for downstream users for just this reason. If ISPs have to buy bandwidth at a ratio closer to what they are selling, their price must go up to stay profitable. This is especially true for the smaller ISPs, since they have less margin for error. Just another reason that Mom and Pop ISPs will go away. [slashdot.org]
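
    The worked version, in Python; the rates are the ones quoted above, and the model is idealized (steady averages, no burstiness):

```python
# How many downstream T1 customers can share one upstream T1 on average?
T1_KBPS = 1544               # one T1, in kbps
AVG_PER_CUSTOMER_KBPS = 200  # observed average per dedicated T1 customer

ratio = T1_KBPS / AVG_PER_CUSTOMER_KBPS
print(f"about {ratio:.1f} customers per upstream T1")  # ~7.7; the 6:1
# ratio actually used leaves the difference as headroom for peak load.
# If P2P pushes the per-customer average up, the sustainable ratio falls
# and the ISP's upstream costs rise accordingly.
```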



    THERE IS NO RIGHT OR WRONG, ONLY DIFFERENT PLACES TO STAND -- Death, in Terry Pratchett's 'Reaper Man'

    "It's politics, and there is no right or wrong... but you are wrong!" -- My girlfriend debating a point.

  • I don't think so. I view SETI@home et al. and IMing as simply outside the scope of the hype. The SETI@home-type efforts are really not commercialized or something that the average consumer is interested in. And instant messaging is an old concept that has been around for ages. Either way, it's not part of the hype. The hype is focused around Napster-type technologies and/or GNUtella-type technologies, which presumably bring a lot of people _new_ things that they desire.

    As for slashdot, that's simply ridiculous. Slashdot is no more P2P than any public forum. The only thing that is P2P about it is that the users, the clients, are generating content, but the same can be said for any number of database applications. It's still the same old client-server technology.
  • Short of getting multiple OC192s from the same provider (good luck), there is almost NO WAY to get around having to spend $300 - $500 per megabit for access to a Tier 1 backbone provider (Sprintlink, UUNet, etc.).

    Say your ISP is in or near a major city and they can get a T3 or OC3 for $300 per megabit (155 x $300 = $46,500 per month). 155 Mbit, perfectly distributed and not oversold, can feed 103 1.5 Mbit DSL users. However, done that way, it's costing the ISP $450 per user per month. You're not going to see that happen. So, ISPs usually oversell by 10 - 20x, which will drop the per user bandwidth cost to, say, $45 at 10x overselling.

    So, *excluding* their manpower, sysadmins, hardware costs, electricity, heating, cooling, tech support, etc. --AND-- given that they can buy bandwidth at $300 per megabit, --AND-- given the gamble that at any moment, only 10% of their DSL users will be maxing out their connection, the ISP still has to charge at least $45 per month per 1.5 Mbit DSL user. More likely it will cost at least $90 before they even -begin- to bring in a profit.

    Peer-to-peer filesharing is a Big Thing and is growing at a huge rate, there's no denying that. With Napster alone, many people are finding their unattended PC using almost half of their DSL thruput at any given moment.

    There is no free lunch. Your ISP, no matter how well connected, can't create bandwidth from nothing -- regardless of whether you get 5 Mbit access via your cable modem or can overclock your Pentium 4 to 1.8 GHz.
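
    The arithmetic above, spelled out; the prices are the comment's own assumed figures, not market data:

```python
# Per-user cost of an OC3 feeding 1.5 Mbit DSL users, with and without
# 10x overselling.
OC3_MBIT = 155
PRICE_PER_MBIT = 300  # $/Mbit/month, assumed above
DSL_MBIT = 1.5
OVERSELL = 10

pipe_cost = OC3_MBIT * PRICE_PER_MBIT   # $46,500/month
users_flat = OC3_MBIT / DSL_MBIT        # ~103 users, not oversold
users_oversold = users_flat * OVERSELL  # ~1,033 users

print(f"pipe: ${pipe_cost:,}/month")
print(f"not oversold: ${pipe_cost / users_flat:,.0f}/user/month")        # ~$450
print(f"{OVERSELL}x oversold: ${pipe_cost / users_oversold:,.0f}/user")  # ~$45
```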
  • Usage-insensitive pricing of Internet access can support market development initiatives, particularly when relatively few players participate, each having made a significant commitment to lease or invest in transmission facilities. With the passage of time, more ISPs have entered the marketplace, often without the need for, or interest in, making substantial investments in facilities. Later entrants may serve smaller geographical regions, and may have a deliberate strategy of "free riding" on the facilities investment of other operators who still agree to accept traffic at quasi-public interconnection points.

    Likewise, because end-user access to the Internet is typically priced on a low, flat-rated, "All You Can Eat" basis, no facility conservation incentive exists, and therefore congestion can readily occur. As congestion threatens to impede quality of service, some ISPs have responded by prioritizing traffic streams, and by varying the price of network access on the basis of the transmission capacity and traffic volume of other ISPs seeking interconnection. This demand-based responsiveness soon might include reserved bandwidth that would provide higher service reliability and quality for a premium price.

    Resorting to traditional pricing mechanisms means parties causing congestion, or contributing comparatively less to congestion abatement, will incur higher costs of doing business. The responsible parties include smaller ISPs who lack the traffic, subscribership and transmission capacity needed to sustain highly reliable service in the face of increased demand and new Internet applications that require more bandwidth. Requiring payment for access to the facilities of other larger companies constitutes an efficient outcome, but one that likely will impose comparatively higher costs on smaller and rural ISPs and their subscribers.
  • by crucini ( 98210 ) on Tuesday February 27, 2001 @05:22PM (#399164)
    The biggest barrier to consumer broadband has been the myth that you can get high-bandwidth, uncapped service for $40/month. Obviously, the people selling that myth were hoping that consumers would hardly use the bandwidth at all. When that turned out not to be true, profitability was threatened. Universities respond to this by trying to restrict the rights of their captive consumers. The crop of quotes in this article suggests that commercial ISPs, on the contrary, are seeing this as a legitimate usage that should be charged for. Ideally, I'd like to see them charge for transfer, rather than adopting a tiered scheme. But even the tiered scheme is a huge step in the right direction, away from the idea that "people who move lots of bits are troublemakers" towards the idea that "people who move lots of bits are our best customers".
    This could have two wonderful effects. First, mega-corporations might find that it's more profitable to sell consumers transport, and remain content-agnostic than it is to build proprietary lock-in schemes and badly admin'ed caching proxies.
    Second, of course, the US government tends to see only profit-making activities as legitimate. When the ISP's are profiting from p2p, they might serve as a counterbalance to the IP cartel that is currently 'educating' Congress on the evils of p2p.
    Anyhow, charging per GB transferred is by far the healthiest business model because it gives ISPs incentives to upgrade their bandwidth, since pipes now show up as revenue producers to the bean counters. Anyone who sells 'bandwidth', on the other hand, is incentivized to minimize the customer's use of that bandwidth, whether by outages, restrictive AUP's, or other hassles.
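
    A sketch of what such a per-GB plan might look like; the base fee, included allowance and overage rate are invented tier values, not anyone's actual pricing:

```python
# Metered billing: flat base fee plus a per-GB charge past an allowance.
BASE_FEE = 20.00       # $/month (hypothetical)
INCLUDED_GB = 5
OVERAGE_PER_GB = 1.50  # $/GB beyond the allowance (hypothetical)

def monthly_bill(gb_transferred):
    overage = max(0.0, gb_transferred - INCLUDED_GB)
    return BASE_FEE + overage * OVERAGE_PER_GB

for gb in (2, 5, 40):
    print(f"{gb:>3} GB -> ${monthly_bill(gb):.2f}")
```

    Under this scheme the heavy P2P user shows up as revenue rather than as a 'troublemaker', which is the incentive shift being described.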
  • ISPs will (hopefully) start moving away from the model where they try to provide everything they can on their homepage, and will change focus to doing things like hosting e-mail and handing out IP addresses. Basically, the system will just be set up to do the basics and support (believe me, there will be incompetent Winblows users for YEARS to come that don't know how to do the internet, and they WILL pay...)

    ALL YOUR BASE ARE BELONG TO US!
  • Interesting point, but I think the real cost of bandwidth is the cost of stringing copper/fiber from point A to point B. And within this cost, I'd guess that there's the relatively small (though huge in absolute terms) cost of telecom-style construction (poles, manholes, repeaters, etc.) and the relatively large cost of dealing with political barriers. For example, how much do you think you'd have to pay to be allowed to install your own cable on all the telephone poles in a town? I'm guessing more than the entire value of the existing physical plant, because the ILEC and cable TV co. would lobby so hard against you.
    So I tend to think that the cost of routers on each end of the line is a trivial cost. Maybe some RF solution will save us.
  • by crucini ( 98210 ) on Tuesday February 27, 2001 @06:36PM (#399170)
    It's not pathetic - it's true. OSes have a philosophy deeply ingrained, and that affects the OS users. I'm not saying you can't be clueful on Windows or clueless on Unix, but I will say that on usenet there is at least a mild correlation between luserish behavior and Windows.
    Every time you interact with an OS, it is silently teaching you the values of the people who created it. An innocuous example from Windows is the tendency to refer to 'your computer'. As in, 'this will install FOO on your computer.' Obviously, if you are logged into (excuse me, onto) the computer, you must 'own' it, right? The idea that more than one person is affected by a system administration issue simply doesn't fit into this mindset.
    Windows is deeply, in its bones, a PeeCee OS. It is completely built around the pre-network world. The Windows user learns that it's normal to be a client. The Windows world is permeated with the idea that servers are somehow special, expensive, rare. Windows NT charges extra for a 'server license'. Compaq and Dell make a lot of money selling huge beefy PC servers to NT-using customers. Although many people claim that NT is a resource hog, I also suspect a psychological motivation - NT admins have a need to visually differentiate their servers from PeeCees. I've noticed that NT admins had a hard time accepting that an old, cast-off PC could become a Linux 'server' that was suddenly 'important'. Not because they're against Linux, but because the natural order of things was reversed.
