



Engineers Develop 'Ultrarope' For World's Highest Elevator

Warbothong Re:LSM (247 comments)

Well this is the best thing I've seen! Why haven't these been pushed out into the commercial area?

For the same reason that maglev trains and HyperLoop-style vacuum tubes aren't ubiquitous: sending a dumb carriage along a smart track is far more expensive than sending a smart carriage along a dumb track, since there's much more track than there is carriage.

Narrowboats used to be the best way of moving materials around inland, but "laying the track" (digging the canals, building the locks, etc.) took a lot of work.

Dumb boats were overshadowed by smarter locomotives: more difficult and expensive to build, but ran on much cheaper tracks.

Locomotives were overshadowed by smarter automobiles: more difficult to invent and requiring a smarter fuel network, but in some cases not needing *any* track laying.

The same argument applies to lifts: it's much cheaper to have a smart motor at the top and/or a smart carriage, with 660m of dumb shaft and cable, than having 660m of smart shaft.

3 days ago

AI Experts Sign Open Letter Pledging To Protect Mankind From Machines

Warbothong Re:I no longer think this is an issue (258 comments)

The reason is, AI will have no 'motivation'... Logic does not motivate... Without a sense of self preservation it won't 'feel' a need to defend itself.

This is a common misconception, which has several counter-arguments to do with resource usage.

Firstly, the idea that human responses "aren't logical" is naive. Humans aren't optimised for calculating exact answers, we're optimised for calculating good enough answers given our limited resources. Effects like emotions, which appear illogical at the "object level" (the effect they have on a particular problem's solution), are perfectly logical at the meta-level (the effect they have on how we solve problems, and what problems we attempt to solve). There are also other meta-levels, all acting concurrently; for example, the solution to our problem might have political consequences (I may choose to do a poor job of washing the dishes, so that I'm less likely to be asked in the future). There may be signalling involved (by taking a wasteful approach, I'm communicating my wealth/position-of-power to others). There are probably all kinds of considerations we've not even thought of yet.

In effect, ideas like "computers don't have emotions" can be rephrased as "we're no good at programming meta-reasoning, multiple-goal, resource-constrained optimisers yet".

No existing, practically-runnable AI systems have an adequate model of themselves and their effect on the world. If we *do* manage to construct one, what would it do? We can look at the thought experiment of "Clippy the paperclip maximiser": Clippy is an AI put in charge of a paperclip factory and given the goal "make as many paperclips as you can". Clippy has a reasonable model of itself, its place in the world and the effects of its actions on the world (including on itself).

Since Clippy has a model of itself and the world, it must know these three facts: 1) very few paperclips form "naturally", without a creator making them on purpose; 2) Clippy's goal is to make as many paperclips as it can; 3) Clippy is a powerful AI with many resources at its disposal. From these, it's straightforward to infer the following: keeping Clippy turned on and fed with resources is a very good way of making lots of paperclips.

From this, it is clear that Clippy would try to stop people turning it off, since Clippy's goal is to make as many paperclips as possible and turning off Clippy will have a devastating effect on the number of paperclips that get made. What Clippy does in this respect depends on how likely it thinks we are to attempt to turn it off, how much effort it thinks will be required to stop us, and how it balances that effort against the effort spent on making paperclips. If Clippy is naive, it may ignore us as a benign non-threatening neighbour, under-invest in its defences, and we could overpower it. On the other hand, Clippy may see our very existence as an unacceptable existential risk, and wipe us out just-in-case.

Regardless of the outcome, self-preservation is a logical consequence of having a goal and the ability to reason about one's own existence.

about three weeks ago

Gunmen Kill 12, Wound 7 At French Magazine HQ

Warbothong Re:islam (1350 comments)

Explain the Crusades, if Christians are so brotherly.

Sorry, I wasn't aware that Charlie Hebdo was a mouthpiece of the Christian fundamentalists. Oh wait, it's not. From Wikipedia:

Irreverent and stridently non-conformist in tone, the publication is strongly antireligious and left-wing, publishing articles on the extreme right, Catholicism, Islam, Judaism, politics, culture, etc.

So what exactly does your loaded question have to do with anything?

about three weeks ago

Vast Nazi Facility Uncovered In Austria; Purported A-Bomb Development Site

Warbothong Re: Non-scientist at work (292 comments)

A similar thing could be said of Konrad Zuse ( http://en.wikipedia.org/wiki/K... ), whose pioneering work on computers was funded by the Nazis. His opinion was that scientists must often choose between doing research with questionable motives, or not doing research at all.

about a month ago

Ask Slashdot: What Tech Companies Won't Be Around In 10 Years?

Warbothong Re:AI (332 comments)

I think AI advances will be important for the economy and our way of life, but the *existing* tech sector won't be too disrupted by it. (Weak) AI opens up new markets for tech companies, which will make many non-tech jobs obsolete and pump *lots* of cash into the tech sector.

Jobs which computers are already good at, ie. following an explicit list of instructions very quickly, will *not* be affected by AI, since an AI approach would take longer to train than just writing down a program, it would make more mistakes and it would be nowhere near as efficient.

Strong AI (Artificial General Intelligence) would definitely be more disruptive, but we're not going to see that in the next 10 years. If we treat Google as the "singularity moment" for weak AI (automatic data mining), I'd say we're currently at about 1910 in terms of strong AI. There are some interesting philosophical and theoretical arguments taking place, there are some interesting challenges and approaches proposed, there are some efforts to formalise these, but the whole endeavour still looks too broad and open-ended to implement. We need a Goedel to show us that there are limits, we need a Turing to show how a machine can reach that limit, we need a whole load of implementors to build those machines and we need armies of researchers to experiment with them. It took about 100 years to go from Hilbert's challenges to Google; I don't know how long it will take to go from Kurzweil's techno-rapture to a useful system.

about a month ago

Ask Slashdot: What Tech Companies Won't Be Around In 10 Years?

Warbothong Re:10 Years Can Be A Long Time (332 comments)

Uber and crap are not innovators, they're basically the Internet equivalent of software patents - you take something that's been known for centuries and add "with a computer program" to it, voila, new patent. Same with most US-based "revolutionary" startups. Take something old and boring, add "over the Internet" to it, voila, investor capital.

You're getting your bubbles confused. "With a computer program" was the 80s AI winter. "Over the Internet" was the dot-com crash. This bubble is all about "apps", which clearly makes it different to the previous two and therefore sustainable.

If you'll excuse me, I'm off to invest a billion dollars in a loss-making text messaging service with no business model.

about a month ago

MIT Unifies Web Development In Single, Speedy New Language

Warbothong Re:Cures whatever ails ya (194 comments)

Ha, so true! It reminds me of those C programmers who claim (with a straight face!) that their "+" operator somehow magically knows not to add floats to ints! Or those Java programmers who seem to have drunk the Kool Aid and seem to *honestly believe* that their compiler will "figure out" that a method signature doesn't match that declared in an interface!

Don't even get me started on those Haskell idiots. Do you know that one of them once told me that they wrote a program by "composing" two functions; that's ivory-tower speak for what we'd call "trying to do something with the result of doing something else" (bloody stupid academics, with their long-winded jargon!). Anyway, get this, he'd done this "composition" *without* checking that the first function returned the right sort of result for the second function!

Obviously I'm never going to trust his flaky, irresponsible code. Much better to check everything as we go, using "===" when we remember, and pretending that code coverage measures edge-cases in our woefully inadequate unit tests.
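For anyone who missed the sarcasm: the dreaded "composition" is trivial to sketch in Python, where the type mismatch that a Haskell compiler would reject up-front only surfaces when the composed function is actually called (the function names here are made up for illustration):

```python
def compose(f, g):
    """compose(f, g)(x) == f(g(x)) -- 'doing something with the result
    of doing something else'."""
    return lambda x: f(g(x))

# Matched types: str returns a string, len accepts one.
length_of = compose(len, str)
print(length_of(12345))  # 5

# Mismatched types: a Haskell compiler rejects this pairing before the
# program runs; Python only complains when the call is actually made.
broken = compose(abs, str)  # abs() cannot take a string
# broken(42)  # would raise TypeError at runtime
```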

about a month ago

MIT Unifies Web Development In Single, Speedy New Language

Warbothong Re:Syntax looks gnarly (194 comments)

It would have killed them, because (n) is a tuple of one element.

It's the same in Python, yet I haven't noticed it killing any Python programmers. Perhaps functional language designers are more fragile creatures.

Functional programmers aren't "more fragile creatures", they're just not prepared to put up with the BS that putting arguments in tuples entails.

It doesn't kill Python programmers, but it sure as hell wastes a whole lot of their time when they write a 3-argument function and have to decide whether it will be called as "f(x, y, z)", "f(x, y)(z)", "f(x)(y, z)" or "f(x)(y)(z)". Functional programmers realised long ago that they're all the same thing, so there's no point writing any parentheses, since they don't add anything except confusion.
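The equivalence those functional programmers spotted can be sketched in Python, which (unlike ML-family languages) forces you to pick one calling convention up front:

```python
# Uncurried: all three arguments supplied at once.
def g(x, y, z):
    return x + y + z

# Curried: each call supplies one argument and returns a function
# awaiting the next, so the only call shape is f(x)(y)(z).
def f(x):
    return lambda y: lambda z: x + y + z

print(g(1, 2, 3))   # 6
print(f(1)(2)(3))   # 6 -- same result, different parenthesisation
```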

about a month ago

MIT Unifies Web Development In Single, Speedy New Language

Warbothong Re:Syntax looks gnarly (194 comments)

(* This is comment syntax in ML-derived languages, like Ur *)

/* This is comment syntax in C-derived languages, like C */

double 4 (* = 8 *)

Is Ur for

double(4); /* == 8 */

Your comment (pun intended) is an example of Wadler's Law

about a month ago

BT, Sky, and Virgin Enforce UK Porn Blocks By Hijacking Browsers

Warbothong Re:MITM legalized at last (294 comments)

It's ridiculous the number of times I've had trouble refreshing my IMAP client, connecting to Jabber, getting APT updates, etc. all with a perfectly valid Internet connection. If I happen to open up a Web browser to try Googling for a solution, I get a warning message about invalid certificates.

It's only if I grant access to this invalid site that I see these stupid messages. I remember one was "Thanks for using our hotel WiFi", with an OK button. No questions asked, no "enter credit card details", no "please agree to these terms", just an attempt to be polite that's been getting in my way.

Of course, it's probably my fault for using the Internet wrong. Maybe I should switch to a Web-app for my email, get a Facebook account to use their browser-based chat system and get system updates by manually downloading "update.exe" from random websites.

about a month ago

Will Ripple Eclipse Bitcoin?

Warbothong Re:Why virtual currencies are ineffective (144 comments)

You're describing a "pump and dump" scheme, not a pyramid scheme.

In pump and dump, the scammer tries to raise the perceived value of something she has (eg. cryptocoins), in order to sell them all off for a higher price than they're worth. Pump and dump may be based around something of real value, eg. the people at the "bottom" might end up with lots of goods; they've just paid too much for them. Pump and dumps involve a fixed amount of goods being passed from one person to another, or possibly split among several, for an increasingly-large price-per-amount, until the person/people at the bottom aren't able to off-load the goods for more than they paid.

In a pyramid scheme, the scammer tries to get payment from multiple people, by promising that they too can get paid by multiple people (and so on). The victims know this, and have either not thought through the consequences, or else hope to cash-out before the scheme inevitably collapses. Pyramid schemes require no real goods, and the price of entering doesn't tend to go up; they just require exponentially more people in each layer. When they collapse, those at the bottom are left with nothing. Some more sophisticated schemes, like "multi-level marketing", may move goods around as well, but that's mostly to distract victims from the true nature of the scheme.
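The inevitability of the collapse is easy to make concrete with a toy model (the recruits-per-member figure is illustrative):

```python
# Toy pyramid scheme: each member must recruit 2 more, so layer n
# contains 2**n members and membership grows exponentially.
def layer_size(n, recruits_per_member=2):
    return recruits_per_member ** n

# Cumulative membership after 33 layers already exceeds Earth's
# population, so recruitment must fail long before then.
total = sum(layer_size(n) for n in range(33))
print(total)  # 8589934591 (2**33 - 1)
```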

The reason crypto/alt coins are a pump and dump rather than a pyramid scheme is that only those at the start have enough coins to get the scheme going. The "goods" in a pyramid scheme are promises, which each member can duplicate easily (eg. if each member must recruit 2 more). The point of cryptocoins is that they can't be duplicated, so they must either be passed to one more person (for a higher price), or be split up so that each person receives less.

That's the story for cryptocoins, which can't be duplicated, but what about whole cryptocurrencies? They aren't a pyramid scheme either, since anyone can set up a new cryptocurrency without entering an existing scheme. I don't buy a cryptocurrency in the hope that 2 more people will pay me for new cryptocurrencies. In fact, if I were running a pump and dump scam I'd want as few competitors as possible!

about a month and a half ago

Economists Say Newest AI Technology Destroys More Jobs Than It Creates

Warbothong Re:This is not the problem (688 comments)

So we can all agree that we have all things for free since robots made them

No, the man who owns the bot won't let that happen.

This is a false dichotomy. If I build/buy/commission a robot and expect a return on investment, that can happen in many ways. Maybe I needed the robot to perform some short-term task (eg. a babysitter); maybe I wanted to sell the robot; maybe I wanted to rent out the robot; maybe I wanted to sell the robot's output (eg. in a factory). All of these things can be done, and then the robot can be used "for free". In the case of continuous tasks, the robot could perform "free" work using any spare capacity (eg. a security guard which (hopefully) spends most of its time idle).

Of course this would require some kind of coercion/enforcement, but it's the same (original) idea behind copyrights and patents. The author/inventor gets some time to pursue a return on their investment, but after that it's a public good. It's also how a lot of Free Software gets made; some company needs a server for doing job XYZ, so they invest in making it. Once it's made, they've (hopefully) got the return they wanted (the XYZ job is being performed), so they release the code as a public good.

about a month and a half ago

Hawking Warns Strong AI Could Threaten Humanity

Warbothong Re:Assumptions define the conclusion (574 comments)

You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, soul. AI's have none of that, nor any hints of it.

You don't need any of that, you just need the raw "intelligence" (however you define it). Look up the thought experiment of the "paperclip maximiser": putting an AI in charge of a factory and telling it to make as many paperclips as it can.

Self-preservation is a logical consequence of paperclip-maximising: the AI knows that it's trying to maximise paperclips, so if it's deactivated there won't be as many paperclips. Hence, it will try to preserve its own existence, so that it can keep making paperclips.

Self-improvement is a logical consequence of paperclip-maximising: the AI knows that it's not an optimal paperclip-maximiser, so it makes sense to invest some resources into improvement; that way, more paperclips will get made.

Competition is a logical consequence of paperclip-maximising: the AI knows that resources are required for making paperclips, so it will try to acquire resources. It also knows that other entities may take those resources before it can, so it's logical to invest in resource acquisition. This includes going to war, as long as the AI reasons that the cost (ie. the opportunity cost, in terms of paperclips, of the resources spent waging war instead of making paperclips) is less than the reward (the extra amount of paperclips it expects to be made afterwards).

Independence is a logical consequence of paperclip-maximising: the AI knows that other entities don't share its goal of maximising paperclips, so it will try to reduce the influence they have on the paperclip supply (either directly, like miners, or indirectly, like those maintaining the AI itself; this is similar to self-preservation).

Soul: that hypothesis was not necessary.

about 2 months ago

Hawking Warns Strong AI Could Threaten Humanity

Warbothong Re:So What (574 comments)

Computer programs do not evolve through a Darwinian process

Citation needed. I can think of at least two ways that computer programs are subject to Darwinian processes.

The first is in "cyberspace": programs are stored/retrieved, up/downloaded, en/decoded, remembered/re-written, auto-saved, git-merged, etc., which introduces mutation. We know software is capable of self-replication, since that ability is harnessed by humans to make viruses, worms, etc. Combine these ideas and you have a program mutating into a self-replicator. I'd say this is no less likely than the emergence of RNA/DNA, especially when we consider that we already have "cloud" software which can provision VMs, install itself, load-balance across separate datacenters, auto-update, etc. The metabolism for a self-replicating entity is already out there; all it takes is a mangled Puppet script to light the taper.

The second is in "memespace": programs, algorithms, languages, paradigms, styles, snippets, examples, habits, folklore, etc. are all memes competing for space in people's heads, just like genes compete for space in genomes. This also extends up to whole companies or fields, eg. MATLAB is abundant in academia, Java and C# compete for "the enterprise", C is successfully out-competing FORTRAN for the high-performance mindshare, whilst retroviruses like Bubble Sort and Brainfuck have spread like pandemics (nobody *uses* them, but that doesn't matter since everyone knows and spreads them).

about 2 months ago

Cameron Says People Radicalized By Free Speech; UK ISPs Agree To Censor Button

Warbothong Re:This already exists (316 comments)

Here we are on a site where strangers can rate what we say, potentially burying it where others won't get the chance to read it, and we're complaining that governments are vaguely coming around to the same idea?

The technology doesn't matter; the intention does.

Moderation/flagging systems are added *by a site's maintainer* to keep the user-generated content relevant, on-topic, useful for visitors, etc. In other words, to make a site better able to fulfil its purpose. In the case of /., that's "news for nerds, stuff that matters".

If the purpose of a particular site is to campaign or recruit members for some political group, then arbitrarily labelling some as "extremist" and censoring such content is clearly *harming* the site's ability to fulfil its purpose. No moderator would willingly enable a system which censors all of the *intended* content! It would be like implementing a "safe search" option on a porn site!

Have you ever used a "webrep" browser plugin? Personally, I think it would be refreshing and useful to have one that works.

The point of these addons is for the *user* to censor what they see, so that it's most appropriate to what they want. Again, a willing recruit for some organisation would not willingly tell their browser to hide any content related to that organisation!

Perhaps an analogy would help: Many sites use CSS to make their pages prettier, easier to navigate, etc. Many users override this CSS with their own, eg. to make pages easier to read, more compact, etc. Neither of these use cases would support a government asking ISPs to inject their own CSS, eg. using background images to spread campaign info.

about 3 months ago

It's Time To Revive Hypercard

Warbothong Re:For the rest of us (299 comments)

Seriously one night I was coding in VB6 and I accidentally created an infinite loop....I shit you not it opened a portal to the ninth plane of hell and demons came pouring out.

This is common in C too http://www.catb.org/jargon/htm...

about 3 months ago

Debian Talks About Systemd Once Again

Warbothong Re:Please Debian (522 comments)

Emacs doesn't hog PID 1, so it can co-exist with alternatives. Running Emacs doesn't stop Vi from working. Running w3m-el doesn't stop Firefox working. Running shell-mode doesn't stop xterm working. Running eshell doesn't stop bash working. Running doc-view doesn't stop mupdf working.

Gnome doesn't drag in Emacs as a dependency.

about 3 months ago

What's Been the Best Linux Distro of 2014?

Warbothong NixOS (303 comments)

Whenever I tried other distros, I'd always go back to Debian in the end, since its package management seems a lot saner than most.

NixOS is refreshing, since its package/configuration management seems to be an improvement over Debian's. It's still a little rough around the edges, but perfectly usable (as long as it loads emacs, conkeror and xmonad, it's usable).

about 4 months ago

Internet Explorer Implements HTTP/2 Support

Warbothong Re:Web services vs. CORBA (122 comments)

Slowly, web services are becoming a bad reimplementation* of CORBA. Once again, why did we jump on their band wagon?

As far as I understand it, SOAP is a reimplementation of CORBA, whereas HTTP is a REST protocol.

Specifically, HTTP doesn't try to keep disparate systems synchronised; it is stateless and has no notion of "distributed objects". Every request contains all of the information necessary to generate a response, for example in HTTP Auth the credentials are included in every request.
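As a concrete illustration of that statelessness, here is roughly what an HTTP Basic Auth request looks like on the wire; the host, path, and "alice"/"secret" credentials are all made-up placeholders:

```python
import base64

# HTTP Basic Auth attaches the credentials to *every* request, since
# the server keeps no session state between requests.
credentials = base64.b64encode(b"alice:secret").decode("ascii")
request = (
    "GET /inbox HTTP/1.1\r\n"
    "Host: example.com\r\n"
    f"Authorization: Basic {credentials}\r\n"
    "\r\n"
)
print(request)
```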

Of course, people keep trying to re-introduce state back into the protocol, eg. for performance ("modified since") or to support stateful programs (cookies). These aren't necessary though; for example, we can replace cookies (protocol-level state) with serialised delimited continuations (content-level state) http://en.wikipedia.org/wiki/C...

about 4 months ago



Eff: A pure language with side-effects

Warbothong Warbothong writes  |  more than 4 years ago

Warbothong (905464) writes "The debate between pragmatism and purity in programming languages has been going on for decades. Pure languages forbid side-effects in their computations (eg. changing a variable), since side-effects make formal analysis hard; whilst pragmatists embrace them to allow quick production of 'good enough' code. A new experimental language called Eff, created by Andrej Bauer and Matija Pretnar, is blurring this distinction. Eff is based on a mathematical model of side-effects, allowing it to harness mutable state, exceptions, IO, random choice and more in a pure way from within a Python-like syntax. The product of on-going research, Eff is still in its infancy and, as its authors state, "...is an academic experiment. It is not meant to take over the world. Yet.""
Link to Original Source

Tiny generator runs off vibrations

Warbothong Warbothong writes  |  more than 7 years ago

Warbothong (905464) writes "Researchers at Southampton University in the UK have developed a tiny (less than 1 cubic centimetre) generator which uses local vibrations to output microwatts of power, making it an alternative to batteries, which need replacing regularly. The devices are currently being used in industry where "there is the potential for embedding sensors in previously inaccessible locations", but its creators imagine it could be used in devices such as pacemakers, where the beating of the heart would produce ample movement for the magnetic mechanism inside to work."
Link to Original Source

Warbothong Warbothong writes  |  more than 8 years ago

Warbothong (905464) writes "I am going off to University this month, so I have been chasing up payments and deposits, etc. online. The other day I received an email confirming that I am all paid up, which is great. The not-so-great part was the email's header, since after "To:" it had a list of 1343 email addresses, including mine. It is pretty clear that all of these addresses are for students paying their deposits online, and it is also clear that this list has been sent to 1343 people. In our world of datamining and spamming I am pretty concerned that sooner or later this list will get into the hands of someone who might want to make a bit of money from a list of 1343 valid email addresses, all in active use, all owned by soon-to-be students at a particular University in the UK who all have the capability for making online payments, so I am wondering what Slashdot readers make of this? Should I be worried? I have already sent an email of concern to the Reply-To: address, and got a swift response that this matter will be dealt with "immediately", but I am not sure there is much that can be done at this point. I would also like to point out, though, that my email address is with Yahoo! and I have apparently already been added to at least two users' Buddy Lists. With that in mind, is this just a subversive way of getting fellow students together before we all leave for the campus, and to hell with the University's privacy policy and the fact that this was my spam-free email account?"


Warbothong has no journal entries.
