(Updated 2.15pm for minor improvements in clarity)
If I were to advertise "Internet access", and sell people an amazing deal featuring unmetered access via a local number, just pay $XX for a year in advance (non-refundable), and then the details came in and it was a shell account with Web access provided via Lynx, and email via Elm, would I be in trouble or not?
The answer is "probably". The Internet is regulated today; it's not immune from general consumer protection laws. But it's also covered by a large number of laws that most technical people probably have some concerns about. Arguments about "more regulation" tend to focus on examples of the latter: enforced censorware installation, bans on speech, and so on. Examples of the former are rarely brought up.
The shell account example highlights a problem with the limits of the regulation we currently apply. OK, I probably cannot get away with a shell account and no refunds in today's climate, but what about a DSL service that runs at 56kbps? What about one that gives everyone access to the Internet via NAT? (Hey, don't laugh, I've seen it done.) When DSL and cable services commonly require either a year's contract (or worse) with heavy disconnection penalties, or a substantial outlay on equipment, and when a customer may not even realise they've been screwed until months afterwards, where is the line drawn?
Why, exactly, should a company sell crippled Internet services? And if one can come up with a good reason for doing so, why should it be able to advertise it using the same terminology as an ISP that doesn't cripple the connections in any way?
Does it help people in general if a large proportion, possibly the majority, of users of Internet connections are using systems that are essentially incompatible with perfectly reasonable applications that should work according to the RFCs? Or that software developers have to pile hack upon hack to create applications that work despite often arbitrary restrictions on Internet service?
I think we do need real, solid regulation right now, in one area above all others: creating a reliable, complete Internet. I don't think the "Free Market" can be left to its own devices here. This is because:
- The free market is not holy, in any way. You know all the "Open Source" people who have been ripping on Stallman et al for the last few years for putting ideology above practicality, and saying that the argument for Free Software should focus on practical issues?
Well, similar practicality arguments won me over to the free market as a solution for many issues. The thing is, like Stallman said about the risks of focussing only on practical issues for Free Software, I'm also looking at the Free Market and saying "Well, it's nice, but I'm prepared to consider alternatives if I think they're better for achieving my short term goals", and that's exactly what I'm looking at here. In some areas, the free market is not the right tool for the job. It takes too long and isn't guaranteed to produce the right result.
We can have a rich Internet with many competing applications using a versatile tool, or we can have something where those applications are crippled by an underlying "competitive" infrastructure market. The market seems likely to produce the latter; certainly that's what we're seeing today. The system is too complex for users to fully understand the implications of every decision an ISP makes, and there's too much power involved, and too many applications that compete with other businesses operated by the companies that own ISPs, for ISPs not to have an incentive to make anti-user decisions.
- Relying upon the free market means waiting until an application broken by a common form of Internet crippling is popular enough to force ISPs to upgrade their services or lose customers. This is a chicken-and-egg situation. How can anyone develop such applications if any number of arbitrary ports may be blocked on a consumer-level Internet connection? What if an application needs to "listen" to be useful, but cannot, because of everything from ISP-imposed NAT to port blocking to absurd terms and conditions?
Why on Earth should we be in a situation where most of us cannot reasonably fathom exactly what the restrictions are until we've actually started using a service?
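In practice, about the only way to find out is to probe empirically. Here's a rough sketch in Python of what that looks like; the function name is mine, and the demo runs against a local listener rather than a real ISP so that it's self-contained:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# Demo against a local listener, standing in for a remote service.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_reachable("127.0.0.1", port))   # True: the "service" is up
listener.close()
print(port_reachable("127.0.0.1", port))   # False: now it looks "blocked"
```

Note the catch: from the customer's side, a port an ISP silently drops and a service that happens to be down look exactly the same. You can't distinguish crippling from outage without information the T&Cs never give you.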
Most T&Cs I've seen, for example, allow ISPs to arbitrarily restrict access via certain ports. How do we know which ports are being blocked today? More to the point, how do we know which ones will be blocked tomorrow? And what's the deal with the blanket restrictions on "servers", essentially outlawing any application that listens for data rather than proactively requesting it? That rules out such innocent activities as managing your own email (often explicitly outlawed, on top of the general restrictions on servers) and using VoIP (unless you only ever make outgoing calls).
Saying "the free market dictates that such rules will eventually be blown away!" is nonsense, because the free market says nothing of the sort. The market is a complex instrument, not a logical one, and it has severe problems with chicken-and-egg issues. Infrastructure (and Internet connectivity is an example) has never really been addressed that effectively by markets. As long as enough people are willing to subscribe to a service, even in desperation, as a last resort, because it's the nearest thing they can get to what they want, there's little incentive for an ISP to improve it.
We suffer spam, for the most part, because most ISPs will not allow people to manage their own email systems. Because of this, there is no market for applications that manage email using the kinds of effective measures that spammers find hard, if not impossible, to bypass.
I'm not asking for cut-down services to be banned. However, we need something a little more positive, so that a vendor can say "This will work as long as you're using an Internet connection that's at least 64/64", and ISPs can advertise "1536/256kbit (128/64 minimum) Internet connections" and consumers know what they mean right away. If an ISP wants to block ports, have the service go down every other Tuesday, and so on, then it should be able to; it just shouldn't be able to use the word "Internet" in its advertising, except in the same sense that, say, Gateway can refer to IBM. Nor would it get common carrier status, which is something that should be offered to the "packet forwarding" part of an ISP that otherwise obeys the rules.
What I'm looking for is something like this:
- ISPs should be required to advertise a bandwidth rating that includes a minimum service level. That means a 1500/384 service should also include, in any literature about it, something along the lines of "(128/64 minimum)".
- A reliability commitment of the "Five 9s" type. This should cover the probability that an arbitrary packet, sent at random by any customer, is delivered to the outside interface of any other customer of a regulated ISP, where both are safely within their minimum service bandwidth and their own equipment is configured and working correctly. Note that this "Five 9s" requirement is part of the regulation; it's not another figure for the ISP to determine for itself and advertise (otherwise the minimum service bandwidth would be bunk).
- Port blocking should be optional. ISPs should null-route all incoming TCP connection request packets by default, but a standardized, secure protocol to unblock ports should also exist. No discrimination of service beyond bandwidth caps (which should still be no poorer than the minimum service bandwidth) should be employed, and if an ISP wants common carrier status, it cannot ban customers from running any type of server. (Discrimination in pricing between private and business use would be reasonable, but the use really would have to be commercial for an ISP to force a customer onto a commercial account; it couldn't just be an "Aha! You're running a server!" rule.)
- IPv6 routing with static IPv6 netblocks given to every customer. There's no reason a VoIP router should have to share an IP address with a PC.
- A standardized network connection port, similar to the standardized phone jack. I don't have to configure my phone and modem to work with a BellSouth network with a number of 555-0138. But right now VoIP providers, for example, have to deal with a variety of different protocols that might be used by the ISP router they're connected to, and any number of network configurations for the user's own network. Should it matter that a DSL connection has PPPoE routing the actual packets?
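To make the "Five 9s" commitment in the second item concrete, here's the back-of-envelope arithmetic. These are my own illustrative figures for what 99.999% implies, not numbers taken from any existing regulation:

```python
# 99.999% reliability: what the loss budget actually allows.
reliability = 0.99999
loss_budget = 1 - reliability        # fraction of packets allowed to go missing

# Per million packets sent, roughly how many may be lost:
print(round(loss_budget * 1_000_000))          # ~10 packets per million

# Expressed as downtime instead: seconds of total outage per year.
seconds_per_year = 365 * 24 * 60 * 60          # 31,536,000 seconds
print(round(loss_budget * seconds_per_year))   # ~315 seconds, a bit over 5 minutes
```

That's a demanding target, which is the point: it's the figure that makes the advertised minimum service bandwidth mean something.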
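And to show how little the fourth item (a static IPv6 netblock for every customer) actually costs in address space, a quick sketch using Python's ipaddress module. The /32 ISP allocation and /56 per-customer block here are my assumptions for illustration, not anything mandated:

```python
import ipaddress

# Hypothetical sizes: the ISP holds a /32 and hands each customer a
# static /56, which the customer can subnet into separate /64 LANs.
isp_block = ipaddress.ip_network("2001:db8::/32")

customers = isp_block.subnets(new_prefix=56)   # generator of /56 blocks
print(2 ** (56 - 32))                          # 16,777,216 customer /56s in one /32

one_customer = next(customers)
lans = list(one_customer.subnets(new_prefix=64))
print(len(lans))                               # 256 separate /64 LANs per customer
# Plenty of room for the VoIP router and the PC to each have their own addresses.
```

Over sixteen million customers from a single /32, with 256 subnets each: there is no scarcity argument for making a VoIP router share an address with a PC.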
I think that covers it. Everything else, from email to DNS, constitutes services that run over the network, and there's no real need to regulate them. (Most ISPs will provide them anyway; I'm just saying they can be left unregulated.) Those who offer services that do not fit the above categories can still do so, but they have to be a little more careful about what they advertise. That said, there should be legal incentives for companies to jump on board beyond simply being able to use the word "Internet" to advertise the nature of their service and getting common carrier status. Perhaps the restrictions on "Internet taxes" should only apply to those willing to play, for example.
You'll note the above is all about infrastructure. I don't think there's any legitimate reason to regulate anything outside of infrastructure issues, because once third parties have an environment they can rely upon to deliver packets, they can offer those services themselves, and competition and a free market can actually be effective.
Obviously, this cannot all be imposed overnight. Some items, such as the unblocking protocol and the standardized connection port, require standards to be agreed upon; others, such as the reliability commitment, will require ISPs to renegotiate connection agreements with their peers and last-mile providers. But gradually phasing them in over a period of three years ought to be practical.
These minimum restrictions will ultimately help end users. They will create a common, reliable infrastructure, making it easier for people to develop new applications and letting the full potential of the Internet develop.