  • Inside Ford's New Silicon Valley Lab

    An anonymous reader writes Engadget takes a look at Ford's new Research and Innovation Center located in Palo Alto. The company hopes to use the new facility to speed the development of projects such as autonomous cars and better natural voice recognition. From the article: "This isn't Ford's first dance with the Valley — it actually started its courtship several years ago when it opened its inaugural Silicon Valley office in 2012. The new center, however, is a much bigger effort, with someone new at the helm. That person is Dragos Maciuca, a former Apple engineer with significant experience in consumer electronics, semiconductors, aerospace and automotive tech. Ford also hopes to build a team of 125 professionals under Maciuca, which would make the company one of the largest dedicated automotive research teams in the Valley."

    36 comments | 8 hours ago

  • Fujitsu Psychology Tool Profiles Users At Risk of Cyberattacks

    itwbennett writes Fujitsu Laboratories is developing an enterprise tool that can identify and advise people who are more vulnerable to cyberattacks, based on certain traits. For example, the researchers found that users who are more comfortable taking risks are also more susceptible to virus infections, while those who are confident of their computer knowledge are at greater risk of data leaks. Rather than being like an antivirus program, the software is more like "an action log analysis that looks into the potential risks of a user," said a spokesman for the lab. "It judges risk based on human behavior and then assigns a security countermeasure for a given user."
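
    Fujitsu hasn't published the tool's internals, but the description (score risk from logged behavior, then assign a countermeasure) suggests a simple rule-based profiler. Below is a minimal, hypothetical Python sketch of that idea; the traits, thresholds, and countermeasures are invented for illustration, not Fujitsu's actual model.

        # Hypothetical sketch of behavior-based risk profiling. The traits,
        # thresholds, and countermeasures are invented, not Fujitsu's model.
        from dataclasses import dataclass

        @dataclass
        class UserProfile:
            risk_tolerance: float        # 0..1, e.g. inferred from clicks on unknown links
            self_rated_expertise: float  # 0..1, e.g. from a self-assessment questionnaire

        def assess(profile: UserProfile) -> list[str]:
            """Map behavioral traits to suggested countermeasures."""
            measures = []
            # Per the article: risk-takers were more susceptible to virus infections.
            if profile.risk_tolerance > 0.7:
                measures.append("stricter attachment and download filtering")
            # ...and the self-confident were at greater risk of data leaks.
            if profile.self_rated_expertise > 0.7:
                measures.append("data-leak monitoring plus awareness training")
            return measures or ["standard baseline policy"]

        print(assess(UserProfile(risk_tolerance=0.9, self_rated_expertise=0.8)))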

    30 comments | 4 days ago

  • Google Search Will Be Your Next Brain

    New submitter Steven Levy writes with "a deep dive into Google's AI effort," part of a multi-part series at Medium. In 2006, Geoffrey Hinton made a breakthrough in neural nets that launched deep learning. Google is all-in: it hired Hinton, had its ace scientist Jeff Dean build the Google Brain, and bought the neuroscience-based general AI company DeepMind for $400 million. Here's how the push for scary-smart search worked, from the mouths of the key subjects. The other parts of the series are worth reading, too.

    45 comments | about two weeks ago

  • An Open Letter To Everyone Tricked Into Fearing AI

    malachiorion writes If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."

    227 comments | about two weeks ago

  • AI Experts Sign Open Letter Pledging To Protect Mankind From Machines

    hypnosec writes: Artificial intelligence experts from across the globe are signing an open letter urging that AI research focus not only on making AI more capable, but also on making it more robust and beneficial while protecting mankind from machines. The Future of Life Institute, a volunteer-run research organization, has released an open letter imploring that AI not be allowed to grow out of control. It's an attempt to alert everyone to the dangers of a machine that could outsmart humans. The letter's concluding remarks (PDF) read: "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls."

    258 comments | about two weeks ago

  • The New (Computer) Chess World Champion

    An anonymous reader writes: The 7th Thoresen Chess Engines Competition (TCEC) has ended, and a new victor has been crowned: Komodo. The article provides some background on how the different competitive chess engines have been developed, and how we can expect Moore's Law to affect computer dominance in other complex games in the future.

    "Although it is coming on 18 years since Deep Blue beat Kasparov, humans are still barely fending off computers at shogi, while we retain some breathing room at Go. ... Ten years ago, each doubling of speed was thought to add 50 Elo points to strength. Now the estimate is closer to 30. Under the double-in-2-years version of Moore's Law, using an average of 50 Elo gained per doubling since Kasparov was beaten, one gets 450 Elo over 18 years, which again checks out. To be sure, the gains in computer chess have come from better algorithms, not just speed, and include nonlinear jumps, so Go should not count on a cushion of (25 – 14)*9 = 99 years."

    107 comments | about a month ago

  • What Happens To Society When Robots Replace Workers?

    Paul Fernhout writes: An article in the Harvard Business Review by William H. Davidow and Michael S. Malone suggests: "The 'Second Economy' (the term used by economist Brian Arthur to describe the portion of the economy where computers transact business only with other computers) is upon us. It is, quite simply, the virtual economy, and one of its main byproducts is the replacement of workers with intelligent machines powered by sophisticated code. ... This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. ... Ultimately, we need a new, individualized, cultural approach to the meaning of work and the purpose of life. Otherwise, people will find a solution — human beings always do — but it may not be the one for which we began this technological revolution."

    This follows the recent Slashdot discussion of "Economists Say Newest AI Technology Destroys More Jobs Than It Creates," citing a NY Times article, and other previous discussions like Humans Need Not Apply. What is most interesting to me about this HBR article is not the article itself so much as the fact that concerns about the economic implications of robotics, AI, and automation are now making it into the Harvard Business Review. These issues have otherwise been discussed by alternative economists for decades, such as in the Triple Revolution Memorandum from 1964 — even as those projections have been slow to play out, with automation's initial effect being more to hold down wages and concentrate wealth than to displace most workers. However, we may be reaching the point where these effects have become hard to deny, despite their running against mainstream theory, which assumes infinite demand and broad distribution of purchasing power via wages.

    As to possible solutions, the HBR article mentions government planning to create public works, like infrastructure investments, to help address the issue. There is no mention of expanding the "basic income" of Social Security, currently received only by older people in the U.S., expanding the gift economy as represented by GNU/Linux, or improving local subsistence production using, say, 3D printing and gardening robots like Dewey of "Silent Running." So it seems the mainstream economics profession is starting to accept the emerging reality of this increasingly urgent issue, but is still struggling to think outside an exchange-oriented box for socioeconomic solutions. A few years ago, I collected dozens of possible good and bad solutions related to this issue. Like Davidow and Malone, I'd agree that the particular mix we end up with will be a reflection of our culture. Personally, I feel that if we are heading for a technological "singularity" of some sort, we would be better off improving various aspects of our society first, since our trajectory coming out of any singularity may have a lot to do with our trajectory going into it.

    628 comments | about a month ago

  • Ars Reviews Skype Translator

    Esra Erimez writes Peter Bright doesn't speak a word of Spanish, but with Skype Translator he was able to hold a spoken conversation with a Spanish speaker, as if he were in an episode of Star Trek. He would speak English; a moment later, an English-language transcription would appear, along with a Spanish translation. Then a Spanish voice would read that translation.
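
    Under the hood this is a pipeline of three familiar stages (speech recognition, machine translation, text-to-speech) chained together. A hypothetical sketch of that flow in Python; the function names are placeholder stubs, not Skype's actual API:

        # Hypothetical translation-pipeline sketch; these are placeholder
        # stubs, not Skype Translator's actual API.
        def recognize(audio: bytes, lang: str) -> str:
            """Speech-to-text: transcribe the speaker's audio."""
            ...

        def translate(text: str, src: str, dst: str) -> str:
            """Machine translation between the two languages."""
            ...

        def synthesize(text: str, lang: str) -> bytes:
            """Text-to-speech: render the translation as audio."""
            ...

        def relay(audio: bytes, src: str = "en", dst: str = "es"):
            transcript = recognize(audio, src)            # English transcription appears...
            translated = translate(transcript, src, dst)  # ...with a Spanish translation...
            return transcript, translated, synthesize(translated, dst)  # ...then spoken aloud.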

    71 comments | about a month ago

  • Research Highlights How AI Sees and How It Knows What It's Looking At

    anguyen8 writes Deep neural networks (DNNs) trained with Deep Learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do? Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e. "survival of the school-bus-iest"). The resulting computer-generated images look like modern abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
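
    The selection loop described above is conceptually simple: perturb candidate images and keep whichever the network scores highest for the target class. A minimal hill-climbing sketch of the idea in Python; it assumes a classifier(img, cls) function returning a 0..1 confidence, and it simplifies the paper's actual evolutionary algorithm, which evolved a population with indirect image encodings:

        import numpy as np

        def evolve_fooling_image(classifier, target_class, steps=10_000, shape=(64, 64)):
            """Hill-climb from noise toward an image the DNN is confident about.

            classifier(img, cls) is assumed to return the model's confidence
            (0..1) that img belongs to class cls. This is a simplification of
            the paper's evolutionary algorithm, not a reimplementation of it.
            """
            img = np.random.rand(*shape)      # start from noise ("TV static")
            best = classifier(img, target_class)
            for _ in range(steps):
                candidate = np.clip(img + np.random.normal(0, 0.05, shape), 0, 1)
                score = classifier(candidate, target_class)
                if score > best:              # "survival of the school-bus-iest"
                    img, best = candidate, score
            return img, best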

    130 comments | about a month ago

  • Economists Say Newest AI Technology Destroys More Jobs Than It Creates

    HughPickens.com writes: Claire Cain Miller notes at the NY Times that economists long argued that, just as buggy-makers gave way to car factories, technology used to create as many jobs as it destroyed. But now there is deep uncertainty about whether the pattern will continue, as two trends are interacting. First, artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Self-driving vehicles are an example of the crosscurrents. Autonomous cars could put truck and taxi drivers out of work — or they could enable drivers to be more productive during the time they used to spend driving, which could earn them more money. But for the happier outcome to happen, the drivers would need the skills to do new types of jobs.

    When the University of Chicago asked a panel of leading economists about automation, 76 percent agreed that it had not historically decreased employment. But when asked about the more recent past, they were less sanguine. About 33 percent said technology was a central reason that median wages had been stagnant over the past decade, 20 percent said it was not and 29 percent were unsure. Perhaps the most worrisome development is how poorly the job market is already functioning for many workers. More than 16 percent of men between the ages of 25 and 54 are not working, up from 5 percent in the late 1960s; 30 percent of women in this age group are not working, up from 25 percent in the late 1990s. For those who are working, wage growth has been weak, while corporate profits have surged. "We're going to enter a world in which there's more wealth and less need to work," says Erik Brynjolfsson. "That should be good news. But if we just put it on autopilot, there's no guarantee this will work out."

    688 comments | about a month ago

  • AI Expert: AI Won't Exterminate Us -- It Will Empower Us

    An anonymous reader writes: Oren Etzioni has been an artificial intelligence researcher for over 20 years, and he's currently CEO of the Allen Institute for AI. When he heard the dire warnings recently from both Elon Musk and Stephen Hawking, he decided it's time to have an intelligent discussion about AI. He says, "The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. ... To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations." Etzioni adds, "If unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity — and even save lives. Allowing fear to guide us is not intelligent."

    417 comments | about a month and a half ago

  • A Common Logic To Seeing Cats and the Cosmos

    An anonymous reader sends this excerpt from Quanta Magazine: Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning for that matter, actually work.

    Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos. The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables artificial neural networks to categorize data as, say, "a cat" regardless of its color, size or posture in a given video.

    "They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.

    45 comments | about a month ago

  • Hawking Warns Strong AI Could Threaten Humanity

    Rambo Tribble writes In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words: "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."

    574 comments | about 2 months ago

  • Here's What Your Car Could Look Like In 2030

    Nerval's Lobster writes: If you took your cubicle, four wheels, and powerful AI, and brought them all together in unholy matrimony, their offspring might look something like the self-driving future car created by design consultancy IDEO. That's not to say that every car on the road in 2030 will look like a mobile office, but technology could take driving to a place where a car's convenience and onboard software (not to mention smaller size) matter more than, say, speed or handling, especially as urban areas become denser and people come to look at "driving time" as time to get things done or relax while the car handles the majority of driving tasks. Then again, if old science-fiction movies have proven anything, it's that visions of automobile design thirty or fifty years down the road (pun intended) tend to be far, far different from the eventual reality. (Blade Runner, for example, posited that the skies above Los Angeles would swarm with flying cars by 2019.) So it's anyone's guess what you'll be driving a couple of decades from now.

    144 comments | about 2 months ago

  • Alva Noe: Don't Worry About the Singularity, We Can't Even Copy an Amoeba

    An anonymous reader writes "Writer and professor of philosophy at the University of California, Berkeley Alva Noe isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce "machines that exhibit the agency and awareness of an amoeba." He writes at NPR: "One reason I'm not worried about the possibility that we will soon make machines that are smarter than us, is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopordy! with Watson. We used 'it' the way we use clocks.""

    455 comments | about 2 months ago

  • Upgrading the Turing Test: Lovelace 2.0

    mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true scope of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to draw a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF). He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
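
    Read as a protocol, the test is: a human evaluator picks a genre, adds creative constraints, and the agent must keep producing artifacts that satisfy them. A hypothetical Python sketch of that loop; the interfaces are invented for illustration, not taken from Riedl's paper:

        # Hypothetical Lovelace 2.0 harness; the agent/evaluator interfaces
        # are invented for illustration, not taken from the paper.
        def lovelace_2(agent, evaluator, genre, max_rounds=10):
            """Pass if the agent satisfies every constraint set so far."""
            constraints = []
            for _ in range(max_rounds):
                constraints.append(evaluator.next_constraint(genre, constraints))
                artifact = agent.create(genre, constraints)
                if not evaluator.satisfies(artifact, genre, constraints):
                    return False      # failed under the current constraint set
            return True               # kept up through every round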

    68 comments | about 2 months ago

  • Google Announces Image Recognition Advance

    Rambo Tribble writes Using machine-learning techniques, Google claims to have produced software that generates better natural-language descriptions of images. This has ramifications for uses such as improved image search and better image descriptions for the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
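
    Google's announcement described this class of system as a convolutional image model feeding a recurrent language model that generates the caption word by word. A rough sketch of that architecture in Python with PyTorch; layer sizes and structure are illustrative, not Google's actual model:

        import torch
        import torch.nn as nn

        class CaptionModel(nn.Module):
            """Encoder-decoder captioner sketch: a CNN image embedding seeds
            an LSTM language model. Sizes are illustrative, not Google's."""
            def __init__(self, feat_dim=2048, embed=512, vocab=10_000):
                super().__init__()
                self.img_proj = nn.Linear(feat_dim, embed)  # CNN features -> LSTM space
                self.word_embed = nn.Embedding(vocab, embed)
                self.lstm = nn.LSTM(embed, embed, batch_first=True)
                self.out = nn.Linear(embed, vocab)

            def forward(self, img_feats, tokens):
                # Feed the projected image as the first "word" the LSTM sees,
                # then predict each next word of the caption.
                img = self.img_proj(img_feats).unsqueeze(1)
                seq = torch.cat([img, self.word_embed(tokens)], dim=1)
                hidden, _ = self.lstm(seq)
                return self.out(hidden)   # next-word logits at each position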

    29 comments | about 2 months ago

  • US Intelligence Unit Launches $50k Speech Recognition Competition

    coondoggie writes The $50,000 challenge comes from researchers at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence. The competition, known as Automatic Speech Recognition in Reverberant Environments (ASpIRE), hopes to get industry, universities, and other researchers to build automatic speech recognition technology that can handle a variety of acoustic environments and recording scenarios on natural conversational speech.

    62 comments | about 2 months ago

  • Robots Put To Work On E-Waste

    aesoteric writes: Australian researchers have programmed industrial robots to tackle the vast array of e-waste thrown out every year. The research shows robots can learn and memorize how various electronic products — such as LCD screens — are designed, enabling those products to be disassembled for recycling faster and faster. The end goal is less than five minutes to dismantle a product.

    39 comments | about 2 months ago

  • Magic Tricks Created Using Artificial Intelligence For the First Time

    An anonymous reader writes Researchers working on artificial intelligence at Queen Mary University of London have taught a computer to create magic tricks. The researchers gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.

    77 comments | about 2 months ago
