Communicating Even When the Network Is Down

coondoggie writes to mention a NetworkWorld article covering efforts to maintain network connectivity even when the network has holes. Building on the needs of the military, the end goal is to create a service which will route around network trouble spots and maintain connectivity for users. From the article: "Researchers at BBN Technologies, of Cambridge, Mass., have begun the second phase of a DTN project, funded by $8.7 million from the Department of Defense's Defense Advanced Research Projects Agency (DARPA). Earlier this year, the researchers simulated a 20-node DTN. With each link available just 20% of the time, the network was able to deliver 100% of the packets transmitted." The article is on five small pages, with no option to see a linkable, printable version.
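The "100% delivery over links that are each up only 20% of the time" result is plausible once nodes can store packets: delivery just takes longer. A toy model (this is an illustrative sketch, not BBN's actual simulation; all parameters here are invented) shows the idea:

```python
import random

random.seed(42)

def simulate(num_nodes=20, link_uptime=0.2, num_messages=100, max_steps=10_000):
    """Toy store-and-forward chain: each message must travel node 0 -> node 19.
    Every link is independently up with probability `link_uptime` each step;
    nodes simply hold messages until the next hop becomes reachable."""
    position = [0] * num_messages       # position[i] = node currently holding message i
    for _ in range(max_steps):
        for i, pos in enumerate(position):
            if pos == num_nodes - 1:
                continue                # already delivered
            if random.random() < link_uptime:
                position[i] += 1        # link was up this step, move one hop
        delivered = sum(1 for p in position if p == num_nodes - 1)
        if delivered == num_messages:
            break
    return delivered

print(simulate())  # all 100 messages eventually arrive
```

With a 20% per-step link uptime, each of the 19 hops takes about 5 steps on average, so every message completes comfortably within the step budget despite no link ever being reliably up.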
This discussion has been archived. No new comments can be posted.

  • Wait a minute... (Score:5, Insightful)

    by J05H ( 5625 ) on Thursday November 16, 2006 @07:17PM (#16877662)
    Wasn't that the point of the original ARPANET? To route around broken parts of the network? BBN was involved in that, too. What, have they been double-billing the DoD this whole time?
    • Re:Wait a minute... (Score:5, Informative)

      by m94mni ( 541438 ) on Thursday November 16, 2006 @07:26PM (#16877742)
      "But all that breaks down when the network ruptures because of repeated disconnections and long delays. BBN has developed a network protocol and code that moves information from node to node as connections become available, and can hold information in persistent storage until a connection is available." They are solving the case when at each point in time, there is *no* end-to-end path. ARPANET assumes there is at least one path, though the path can vary over time.
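The hold-in-storage-until-a-link-appears behavior described in that quote can be sketched in a few lines (purely illustrative; the class and method names are made up, not BBN's protocol):

```python
from collections import deque

class StoreAndForwardNode:
    """Minimal DTN-style node: messages sit in local storage until a link
    to the next hop reports as up. No end-to-end path is ever required."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()              # stand-in for persistent storage

    def receive(self, message):
        self.queue.append(message)        # hold the message indefinitely

    def try_forward(self, next_hop, link_up):
        """Push everything we hold one hop closer, but only if the link is up."""
        if not link_up:
            return 0                      # nothing lost; we just keep waiting
        sent = 0
        while self.queue:
            next_hop.receive(self.queue.popleft())
            sent += 1
        return sent

# The link a->b is down at first, then comes up; the message is never dropped.
a, b = StoreAndForwardNode("a"), StoreAndForwardNode("b")
a.receive("hello")
a.try_forward(b, link_up=False)   # message waits in a's storage
a.try_forward(b, link_up=True)    # now it moves one hop closer
print(list(b.queue))              # ['hello']
```

Contrast this with IP, which would simply drop the packet when no route exists at send time.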
    • This is different in that the final destination address might not be known.

      From the article ... In a DTN, messages can be launched from a source node even though the final destination IP address can't be known due to disruptions of name servers or routers.
      • by jfengel ( 409917 )
        So the messages carry, what, domain names? (It's probably in the article, and I did read the first page, but just wasn't going to click four more times in hopes that maybe it was there if I could just ask you.)
        • As I had mentioned a few times in this discussion, there is a link to a "print" version. However, to answer your question, it seems to me that the messages know what direction to go, just not the details of the final destination. The article is more conceptual than nutz-n-boltz, but I'm sure some info digging on the subject of their "dieselnet" will give you the finer technical details.
        • Re: (Score:2, Informative)

          by strstrep ( 879828 )
          They carry endpoint IDs, which achieve a similar functionality to IP addresses and TCP/UDP ports, but are also human-readable. DTN protocols are fairly high-level, so they can do that.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        Yes, a Delay Tolerant Network functions similar to SMTP, but as pointed out above, the actual destination address is not resolved before sending the message. More importantly, DTN messages can be sent even when there is no simultaneous connection possible between source and destination. The assumption is that different portions of the route will be up from time to time and that the message will be forwarded along the route whenever possible. Today's Internet can't do that because it generally doesn't use
    • "What, have they been double-billing the DoD this whole time?"

      Would this really be a surprise? It's the American way.
    • Re: (Score:3, Interesting)

      by jmorris42 ( 1458 ) *
      > Wasn't that the point of the original ARPANET? To route around broken parts of the network? BBN was involved
      > in that, too. What, have they been double-billing the DoD this whole time?

      Not really, the Internet assumes nodes can change but there is an end to end link possible, if not instantly within a couple of seconds of reconfiguring or outage. This is more like reinventing packet radio or meteor scatter. Mebe they should go talk to some old hams to get some ideas instead of spending millions to r
    • by rmdyer ( 267137 ) on Thursday November 16, 2006 @08:20PM (#16878344)
      In the new "non" net-neutral(ity) world, routing around trouble spots was not a service you paid for. If you need that service it will be an extra $10.00 a month. We love all our customers and hope your experience with our product is to your satisfaction. Now, if you would please take just a few moments and fill out our survey...

    • by Capt. Skinny ( 969540 ) on Thursday November 16, 2006 @08:46PM (#16878624)
      Wasn't that the point of the original ARPANET? To route around broken parts of the network?
      ARPANET was never about sustaining communication in the event of network failure. That goal belongs to the development of packet switching - a separate government-funded project by the RAND Corporation at about the same time. Sorry, I'm too lazy to dig through my e-mail to find my references.
  • Zonk... (Score:3, Insightful)

    by Anonymous Coward on Thursday November 16, 2006 @07:17PM (#16877664)
    Baby, darling. I appreciate the warning, but you do realize, as a janitor at Slashdot you have a decent amount of power, clout in the nerd world. Even though you're condemning their actions with your comment, you're promoting their site, giving them extra ad revenue with their annoying practices.

    If you want to make a difference, make a stand, stop linking to sites like these. Send them a quick letter saying you'd be happy to send X thousand happy clickers their way if they'd give a single page, printable version. With their "Slashdot it" link at the bottom of the page, they obviously care.
    • You must be just as blind as Zonk. The link to the print version is right next to the "Slashdot it" link!
    • I don't think we should be encouraging printing.

      As well as the environmental issue, which we should all keep chipping away at but is not a large issue here, there is the problem of finding things.

      If information is in large pieces then it is hard to find exactly what you search for. If it is in small pieces, but linked to others, then search engines can help us to search very specifically.

      So slice articles finely, a page on a screen is about right.
  • The article is on five small pages, with no option to see a linkable, printable version.

    Yea, except for maybe the link at the bottom of the article that says "Print".
  • What, AGAIN? (Score:1, Insightful)

    by stanwirth ( 621074 )

    "Researchers at BBN Technologies, of Cambridge, Mass., have begun the second phase of a DTN project, funded by $8.7 million from the Department of Defense's Defense Advanced Research Projects Agency (DARPA)

    The US taxpayer already funded this project back in the '70s and '80s. This was the goal of the original arpanet.

    Or maybe BBN is admitting failure, which, in the world of military research contracting is code for "so you should give us another 8-10 million dollars to do the project again."


    • by m94mni ( 541438 )
      See posts above, such as http://it.slashdot.org/comments.pl?sid=207000&cid=16877742 [slashdot.org] In short, ARPANET assumes there is at least one path. In their case, packets will have to be stored half-way through waiting for a way forward to appear.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Arpanet was fairly decent at it. Then capitalism got involved.

      -Redundancy? Too expensive! CEOs need Porsches more than you need a second path to slashdot!
      -Bandwidth? Bah! We can sucker consumers into buying packages with "up to" 500Mbps speeds, and then only actually provide 128kbit while they're locked into a 50 year contract!
      -Best Path Routing? Our routers are the best in the business! And if you don't want to be routed to our customers by way of Kazakhstan, you'll pay up!

      The old saying that the in
    • Re:What, AGAIN? (Score:5, Informative)

      by xyzzy ( 10685 ) on Thursday November 16, 2006 @07:57PM (#16878090) Homepage
      This is an old wives' tale that deserves to die. The ARPANet was NOT built as an experiment in resilient networking; it was built by DARPA to connect scientists so they could share all the large computers that DARPA was funding.

      See: Where Wizards Stay Up Late
      http://www.amazon.com/Where-Wizards-Stay-Late-Internet/dp/0684832674 [amazon.com]

      and
      http://www.businessweek.com/1996/38/b349359.htm [businessweek.com]
      • What you're describing is NSFnet, which was based on the arpanet. NSFnet, proposed in the early '80s, aimed to expand the connectivity of the arpanet via several high-speed backbones for the purpose of scientific data exchange. I collaborated on several projects using NSFnet.

        The arpanet (and I was a registered arpanet user prior to the installation of the NSFnet backbones), was developed for military purposes -- (a fun trick was to send packets all the way around the world via, for example, a node cal

        • by xyzzy ( 10685 )
          Well, congratulations to you. You can certainly choose not to believe the book, written by writers from the New York Times, with the collaboration of Vint Cerf and Bob Kahn. Clearly you have superior knowledge to those two individuals.
    • by kfg ( 145172 )
      Or maybe BBN is admitting failure. . .

      More or less; and no. Let's just say they left room for improvement the first time around.

      To use the ever popular car analogy, networking as an infrastructure is about where the automotive infrastructure was around WWI. It exists on a largish and commercial scale (the French were able to move the army to the front lines overnight by rounding up all the taxi cabs in Paris), but it's still largely piggy-backed on older infrastructures (the first roads made expressly for autom
    • See, the problem is that first thing, the "Internet" thing, got away from the powers that be. They let the peasants behind the castle walls, and now it's all spoilt for the really "important" people and their really "important" business.

      So now, see, they've got to start from scratch, and this time, boyo, there's gonna be none of this "Net Neutrality" stuff mucking up the works. And you best believe there won't be any dirty-necked hacker types or dot.com money-for-nothing strivers in the picture. This ti
  • by toby ( 759 ) * on Thursday November 16, 2006 @07:21PM (#16877704) Homepage Journal
    Anyone else feel like they're time travelling when they're reading this?
    • You missed the point of DTN (available when you RTFA) - at any instant in time, no end-to-end connectivity is needed. Standard network protocols (including those developed back in '83) cannot function without end-to-end communication.
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        Well, actually a lot of the oldies didn't rely on end-to-end communication.. IP, X25, and probably proprietary LAN protocols didn't.. but Decnet, some IBM mainframe messaging deal I forget the name of, Fidonet, UUCP, all forwarded data without end-to-end communication. You could get file transfer, E-Mail, and with something fancy like IBM's deal, remote program execution with you getting the results back, and a super-laggy telnet thing apparently.. (well, to avoid 5-hour lag you'd want end-to-end commu
        • by jandrese ( 485 )
          There is already a web browsing proxy available. I'm not sure if the dtnrg.org link to it is obvious yet (you used to have to be a genius to find it), but it's there. Admittedly, it's just a hack of wwwoffle that is designed to work over DTN (basically, when you request a page that isn't in the cache and there is no good end-to-end connectivity, it brings up a webpage asking you how much of the remote page you want (images, scripts, spider down a level or two, perhaps with some keywords to search for, up
  • by dfay ( 75405 ) on Thursday November 16, 2006 @07:29PM (#16877776)
    I'm glad DARPA funds stuff like this. They should perhaps call it DARPA-net or something like that. Also, perhaps this research will result in really cool new inter-networking technology that the public can make use of. Perhaps universities might be the first big users.

    Of course, if that happens, I hope this new inter-networking thing doesn't get privatized... 'cause then all kinds of crazy things might happen.

    (For the uninitiated or those who like things spelled out, see: http://en.wikipedia.org/wiki/History_of_the_Internet [wikipedia.org])
  • Anyone got a mirror? ;)
    • Re: (Score:1, Funny)

      by Anonymous Coward
      >> Anyone got a mirror? ;)

        Careful, every time I look in a mirror, I see some weird guy masturbating.
  • From the article :

    BBN has developed a network protocol and code that moves information from node to node as connections become available, and can hold information in persistent storage until a connection is available.

    Wow... what can I say? Over 8 million bucks to re-discover or re-invent SMTP... (otherwise called email for those who don't remember TLA's)

    Welcome back to August 1982 !
    Read the press release here : http://www.faqs.org/rfcs/rfc821.html [faqs.org]

    • by m94mni ( 541438 )
      I'm not sure if SMTP really can do the same thing. Sure, it allows you to wait for the destination host (MX record) to become available, but what about the nodes in between? If parts of the network between sender and destination go down, this approach might still be able to get the packet through.
      • by indaba ( 32226 )
        I was thinking of a network where every node along the path could run an SMTP host. That way there are no path/routing issues. If an SMTP host that's nearer the destination comes online, then send the message.

        How do you determine if the host is nearer ?
        Use the DNS LOC resource record (type code 29): it associates a geographical location with a domain name. Defined in RFC 1876.

        See : http://en.wikipedia.org/wiki/LOC_record [wikipedia.org]
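For illustration, picking the "nearer" relay from LOC-style coordinates boils down to a great-circle distance comparison (a toy sketch; the relay names and coordinates are invented, and real LOC records also carry altitude and precision fields):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (mean Earth radius ~6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical coordinates, as if pulled from LOC records:
destination = (40.7, -74.0)                      # roughly New York
relays = {"relay-boston": (42.4, -71.1),
          "relay-seattle": (47.6, -122.3)}

# Forward the message to whichever online relay is geographically closest.
nearest = min(relays, key=lambda r: haversine_km(*relays[r], *destination))
print(nearest)  # relay-boston
```

As the reply below notes, geographic nearness is only a rough proxy for good routing.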

        • by jandrese ( 485 )
          Geography is not necessarily related to the best data links to choose from. Sometimes it pays to go out of your way to hit a backbone instead of trying to jump through a thousand mom and pop ISPs to get to your destination.
      • So can SMTP. (Score:3, Interesting)

        by khasim ( 1285 )
        The spec provides for "intermediate" servers receiving the message and passing it on.

        Years ago this was duplicated with the old BBS's and phone lines. I'm talking about the single user at a time boards. One phone line. Lots of waiting.

        The boards had the numbers of different boards that they would call as the lines were free (theirs and the recipient's). Messages would be passed along whatever route was available until they were received at the destination.

        This model is heavily dependent upon storage, thoug
        • by jandrese ( 485 )
          The DTN routing community is very active. As it turns out, in a Disruption Tolerant Network a lot of the assumptions you make about normal routing can be wrong. For instance, Routing Loops are not necessarily bad (and sometimes necessary). Finding the best path through the network (especially if it's an ad-hoc network) is a hard (in just about every sense of the word) problem.
    • Re: (Score:2, Informative)

      by daiichi ( 888740 )
      This isn't SMTP. SMTP is a layer built atop TCP/IP for sending very specialized messages. Apparently BBN's protocol is generic enough to conceivably cache HTTP requests (e.g. the reference to a "google earth map.") So I would give them the benefit of the doubt until more information is forthcoming.

      A real criticism of what BBN is doing is that, heck, my cell phone is low enough on memory already--and I would be very put out having to share that meager space in order to persist that scoutmaster r

      • by jandrese ( 485 )
        There is already a HTTP proxy for DTN. You hit the nail right on the head that this isn't SMTP, it's far more generalized.

        As for storage requirements on the routing nodes, it is up to them to know how much storage they have and the status of their links. If they have no storage available (or if they are configured not to store that kind of data), then they can refuse to take custody of the DTN bundle. If that happens, there are several options available to whoever does have custody of the bundle. The
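The refuse-custody-when-out-of-storage behavior described here can be sketched as a toy model (class and field names are made up; the real DTN custody-transfer mechanism is considerably richer):

```python
class CustodyNode:
    """Toy DTN node that only takes custody of a bundle if it has room."""
    def __init__(self, capacity):
        self.capacity = capacity   # free storage, in arbitrary units
        self.held = []             # bundles we have accepted custody of

    def offer(self, bundle_size):
        """Accept custody only if there is storage available; otherwise the
        current custodian keeps the bundle and may try another neighbor."""
        if bundle_size > self.capacity:
            return False           # refused: custodian must look elsewhere
        self.capacity -= bundle_size
        self.held.append(bundle_size)
        return True

# A full node refuses the bundle; the custodian falls through to the next one.
custodian_candidates = [CustodyNode(10), CustodyNode(500)]
bundle = 100
new_custodian = next((n for n in custodian_candidates if n.offer(bundle)), None)
print(new_custodian is not None)  # True: the second node had room
```

If every candidate refuses, the original custodian simply keeps holding the bundle, which is exactly the "several options remain available" situation the comment mentions.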
    • Don't you mean FLA?
    • by icydog ( 923695 )
      SMTP... (otherwise called email for those who don't remember TLA's)

      TLA... otherwise called Three Letter Acronym for those who don't remember TLA's.
    • by jc42 ( 318812 )
      BBN has developed a network protocol and code that moves information from node to node as connections become available, and can hold information in persistent storage until a connection is available.

      Wow... what can I say ? - over 8 million bucks to re-discover or re-invent SMTP...


      Funny, but maybe a bit mistargeted. The idea behind SMTP really was to do email via a direct end-to-end TCP link. Caching when the destination couldn't be reached was a "temporary" kludge that was grudgingly added because the Int
  • Yes, SMTP is an amazingly strong example of redundancy. However, we installed redundant fiber at a school I work for within a few days, and just for fun we'd pull plugs randomly and monitor the response time while an alternate link was used. I think 10ms was about average... Then it stopped being fun after a while. We even tested load balancing.

    So my question is.. why are we treating this like it's a new thing? This seems like another one of the frequent quasi-ads which seem to be more common lately here
    • by vidarh ( 309115 )
      Sure. Now do the same test with transmitters and receivers using low-power radio links from vehicles in motion during bad weather conditions, and try to connect to a server whose IP address you don't know and that isn't in your local DNS cache while your connection is down, or try to maintain a TCP connection as your route to the internet changes and you see periods of minutes or hours without a working end-to-end route at any one time.

      You're not thinking about the type of scenarios these guys are workin

      • True, I wasn't referring to what you had described, but the principle behind it is the same- smart networking devices that monitor link status and dynamically route around it. If it's something that is global (or at least you work completely within the confines of this system), you're all set, in the same way that access points do handoffs- however, if you aren't privy to that kind of luck, some sort of tunnel (think VPN) allows for a similar solution. Not only does that satisfy said requirement of a dynamic
  • In related news, the DoD has awarded RoundCo Inc. a 100 million dollar contract to develop a circular structure to facilitate the movement of objects with maximum efficiency. RoundCo is currently investigating deploying rubber-based, air-filled rings to fit this need. "This new technology could revolutionize logistics," says RoundCo CEO David Goodyear-Wheeler.
  • DTN!=ARPANET (Score:1, Informative)

    by CryptoKiller ( 78275 )
    The goal of the Arpanet was to provide resilient packet forwarding in the presence of multiple node failures. However, the Arpanet model does assume that at any given moment there is end-to-end connectivity between the two communicating endpoints. DTNs do not assume that there is necessarily *ever* a direct, end-to-end connection between communicating endpoints. DTNs are store-and-forward networks, much like email or UUCP; they don't look anything like Arpanet or the Internet.
  • by G4from128k ( 686170 ) on Thursday November 16, 2006 @08:14PM (#16878258)
    Although this research is nice, it does not address the worst vulnerabilities of the current internet. Botnets, ARP poisoning, DNS poisoning, pwned routers seem to be a more dangerous risk than mere unreliable components. Cyberterrorism and criminal exploitation of the internet means subverting the system rather than just breaking pieces of it.

    The original internet design carried the naive assumption that all the devices on the net could be trusted -- all the devices assumed the validity of all control data, responses to protocols, etc. In the original model, devices had two primary states -- "unavailable" and "available" where "unavailable" might cover both damaged or overloaded components (a slightly more sophisticated version assesses capacity or latency as gradations between the binary unavailable/available dichotomy). In this one dimensional two-state model, disruption tolerance means routing around "Unavailable" or overloaded components.

    Yet the rising threat is from malicious entities that want to subvert the network's functioning, not just disable it. Spam, phishing, click fraud, and extortion depend on twisting a functioning network, not just poking holes in the network -- all the parts remain "available" but their data and responses become deceptive. Thus future fault-tolerant networks will need to distinguish between trustworthy and untrustworthy components. This suggests employing techniques such as cryptographic signatures, polling systems, blacklisting, FOAF, firmware integrity checks, and device-to-device secret questions.

    Designing a more robust internet is a laudable task but we need to spend more effort on securing against the true threat of untrustworthy components rather than unavailable components.
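The "cryptographic signatures" item in the list above can be illustrated with a toy message-authentication sketch using Python's standard hmac module (the shared key here is a placeholder, not a real device-provisioning scheme):

```python
import hashlib
import hmac

SHARED_KEY = b"pre-provisioned device key"   # placeholder; a real deployment
                                             # would use per-device keys or PKI

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag so peers can check the message's origin."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

msg = b"route update: prefix 10.0.0.0/8 via node-7"
tag = sign(msg)
print(verify(msg, tag))                # True: untampered control data
print(verify(b"route update: prefix 10.0.0.0/8 via evil-node", tag))  # False
```

The point being that a subverted-but-"available" node fails the check, which is exactly the distinction the comment draws between unavailable and untrustworthy components.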
    • by jandrese ( 485 )
      Yes, DTN does not solve the problems it was not designed to solve. DTN is all about getting data through networks with intermittent connectivity, it has nothing to do with any of the stuff you listed.

      This is kind of like asking "Why are medical companies developing cures to minor diseases that only tens of thousands of people have when they still haven't cured cancer?"
  • by thanasakis ( 225405 ) on Thursday November 16, 2006 @08:19PM (#16878332)
    It is clear from the article that they are aiming for something more than OSPF or other link state routing protocols. If a link is cut inside a network, OSPF adjusts so that traffic is routed through alternative paths. But, until there is convergence (which is quite fast in most cases), packets may be lost. Packet drops do tend to occur if a router cannot find a suitable route to a destination, if it is able to find a route but the link to that route is down, or even if the queue on that link is congested (full). That's the very nature of our present best effort internet.

    It appears to me that these guys try to address some of these "shortcomings" by making certain provisions that can guarantee packet delivery, even in an overly late fashion. Routing instability, lost routes, or downed links should not be able to cause packet drops if they have it right.

    However, I used the quotes around "shortcomings" because I am not entirely certain that this has not been tried before. If, instead of a best effort packet routing service, you try to invent a "smart" network layer that can guarantee stuff like ordered delivery (packets are delivered in the order they departed), assured delivery (even with great delays) etc, you are basically trying to invent a (gasp!) connection-oriented service. Not that connection-oriented technologies are inherently bad, but, well, they are certainly an order of magnitude harder to implement. Does anyone remember OSI? It might as well be easier to leave IP simple as it is and try to move some smartness to the upper layers.

    Additionally, it would be better to try to build on top of unreliable services like IP and construct stuff like SMTP (as a previous poster very cleverly pointed out), which can function even if parts of the network are malfunctioning.

    Well, anyway, you might also want to take a look at the efforts on the interplanetary internet [ipnsig.org]; this article reminded me of it.
    • Umm, if I'm not mistaken, the whole point of TCP is to provide guaranteed delivery (at least, as long as delivery is possible). If some packets are lost, no acknowledgment is sent from the destination to the source, and the source re-sends the packets.
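The acknowledge-and-retransmit loop described here is easy to sketch as a toy stop-and-wait model over a simulated lossy link (function names and loss rates are invented for illustration; real TCP is far more elaborate):

```python
import random

random.seed(1)

def lossy_send(packet, loss_rate=0.5):
    """Stand-in for an unreliable link: True means the packet got through."""
    return random.random() >= loss_rate

def reliable_send(packet, max_tries=50):
    """Stop-and-wait retransmission: resend until both the data and the
    returning ACK make it across. This only helps while *some* end-to-end
    path exists, which is where DTN picks up and TCP gives up."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(packet) and lossy_send("ACK"):
            return attempt          # delivered and acknowledged
    raise TimeoutError("no end-to-end connectivity; retransmission gave up")

print(reliable_send("segment-1"))   # delivered after a few retransmissions
```

As the reply below points out, when the link never comes up during the connection's lifetime, this loop times out and delivery drops to zero, no matter how many retries you allow.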
      • Re: (Score:3, Informative)

        Right, but only if you've got end-to-end connectivity. If you don't, TCP breaks, and you get zero delivery. There are situations (see data mules and wireless sensor networks in disconnected environments) where you simply cannot have a complete end-to-end link, but periodic links within the larger path are still possible or even predictable. DTN can take advantage of single TCP connections without requiring the entire set of nodes from source to destination to be up at one time, and, if this project works
    • Re: (Score:3, Insightful)

      by ebyrob ( 165903 )
      The problem is, discarding extraneous packets is actually a VERY GOOD THING when it comes to the internet. Several store and forward systems pre-dated the current TCP/IP stack, but guess what. They weren't as efficient in terms of required hardware resources or latency. This is because in a store and forward network, certain problems (like network cards going nuts and spewing tons of garbage) can cause lots and lots of data to accumulate in the network, and then you have to wait for every single packet t
  • by macdaddy ( 38372 )
    Routing around holes in a network... Sounds like the basic functionality of routing protocol to me. So they're getting paid big bucks to re-invent IGPs like IS-IS [wikipedia.org], OSPF [wikipedia.org], RIP [wikipedia.org] (though this POS creates more holes than it routes around), IGRP [wikipedia.org], EIGRP [wikipedia.org] or an EGP like BGP [wikipedia.org]? Hell when it comes right down to it good ole IEEE 802.1D [wikipedia.org] is a layer-2 routing protocol (when you think about how it actually works and not the generic description you read about in references). Hello, wheel.
    • by jandrese ( 485 )
      Except they're not routing _around_ holes (because there are no routes in the networks where DTN is designed to be deployed), they are routing through the holes. The point is that if you have a reasonable expectation that a link will be back up at some point, and there is no other way to go, then it's better to sit on the packets and shove it through when that link does come back up than just kill the connection and tell the user to try again later.

      This is especially important when there are multiple "h
  • Are we talking outages of 20ms or 5 minutes or 3 days or what?
  • Welcome to networking 101. The trouble with a fully meshed, multi-vendor layout is the cost, and few companies are willing to pony up the required loot to maintain a completely redundant network.
    • by vidarh ( 309115 )
      This is NOT about redundancy. This is about methods for tolerating downtime. It could be used with network where redundancy is infeasible, or combined with redundancy to be able to handle extraordinary situations.
      • I fail to see the difference. What IS redundancy if not a "method for tolerating downtime?" I admit that I didn't spend much time on the article, but mesh is mesh. You either have it or you don't. Is this a sales pitch for some new technique that is entirely different? Are we using psychics to send packets yet? Were those cracks about interns running around with flash drives quotes from the article?

        Naturally, organizations should have fall back procedures for catastrophes. That's like saying there sh
        • by vidarh ( 309115 )
          Redundancy is a method for avoiding downtime, not for tolerating it. You add multiple paths to make sure there is always an unbroken chain from you to "somewhere". That works for scenarios where each link has a certain percentage chance of being up, and where that chance is fairly high, so that you can get a sufficient uptime by adding multiple independent links and playing the percentages.

          This technology is a solution to a different problem: A situation where it is expected to be periods with no path bet

  • DTN (Score:2, Informative)

    by Anonymous Coward
    No, this has not been done before in this manner. The internet does not communicate when disconnected. Try to send a file to a machine that is turned off or not connected to the net and see what you get?

    This type of network, a DTN (disruption-tolerant network, which btw is similar to DTN, delay-tolerant network; see the IETF working group), is oriented towards disconnected operation, mobile nodes, and ad-hoc environments.

    BBN is not the only participant (though it is a big one). The project includes various
    • by jc42 ( 318812 )
      Try to send a file to a machine that is turned off or not connected to the net and see what you get?

      What you get is called "Usenet", and it's been doing just that quite successfully for a few decades now. ;-)

      Usenet originally ran mainly on top of UUCP, invented at Bell Labs back in the 1970s. UUCP implemented the same sort of scheme some years before the Internet came into existence. The general term is "store-and-forward".

      It's all covered in many "intro to networking" courses.
  • ... some kind of DARPAnet birthday celebration ?
  • Press the "Print" button on the page and it opens up the article in a printable version window. The layout wasn't that bad to begin with, though. I don't understand why you all complain so much
    • Press the "Print" button on the page and it opens up the article in a printable version window
      Let me guess, to actually print it you press a "Print Preview" button?
  • Of course, a lot of people/ISPs do this already (not at the internet level but within their network); trouble is, when one of the links goes down, the 'failover' route gets its own traffic and the traffic from the broken route, AND there's not enough bandwidth over this route to handle both sets of traffic.

    Happened to me many many times...as a customer of lots of different ISPs.
  • that gives you a one-page format of the article. Counter-intuitive? yeah, that's right.

    Not as bad as the macromedia paged website the other week. Sheesh!!!!!
  • As with any news article, it is trying to explain the concepts to a general audience. This always leads to misconceptions about what the technical solutions and problems are. Primarily, DTNs are not designed to "fix" the internet; they are designed to deal with disruptions at the edges, and with challenged networking environments (primarily mobile ones). If you are interested in some technical information (some shameless self-promotion as the DieselNet project mentioned in the article belongs to mys
