Ask Slashdot: Gaming With Only One Hand?
I've read that ECoGs are being tested in prostheses, which is very cool. I would think that a less invasive alternative (BCI is one, correct?) would become available before "brain implants" would be offered to the general public. The challenge as I understand it is the sensitivity of the sensors, as well as the signal-to-noise ratio. Is there a lot of development in this area, or is it limited to NASA?
Well, in theory any brain-computer interface is a BCI, whether it is based on EEG, ECoG, fNIR, or another technology, but I understand what you are asking, and no, EEG and fNIR are not considered invasive. At worst you are spending a lot of time on preparation (30 minutes or more when many electrodes are involved), getting conductive gel in the user's hair, or placing the user inside a large machine for the duration of use.
Sensitivity of the sensors is important, but removing noise and artifacts from ambient electrical devices and simple muscle movements (eye blinks, twitch responses, talking, etc.) is the greater issue. In addition you have to ensure that sensors are refitted with as much precision as possible between sessions (a different position may yield slightly different signals, or at least the signals you used last time may be closer or farther away now), not to mention physical changes in the brain, including neuron migration as a result of neuroplasticity: the act of training yourself to use the software produces changes in the brain to which the software needs to continue to adapt.
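The mains-hum part of that noise problem is at least tractable in software. As a crude illustration only (research pipelines typically use proper IIR notch/band-pass filters plus artifact-rejection techniques such as ICA, not this), here is a toy FFT-based notch that strips 60 Hz interference from a synthetic signal:

```python
import numpy as np

def notch_filter(signal, fs, notch_hz, width_hz=2.0):
    """Crude spectral notch: zero out FFT bins around notch_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[np.abs(freqs - notch_hz) <= width_hz] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# Synthetic "EEG": a 10 Hz alpha rhythm plus 60 Hz mains interference.
fs = 250  # a typical EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)

clean = notch_filter(eeg, fs, notch_hz=60)
```

Eye blinks and muscle artifacts are much harder than mains hum, since they overlap the frequency bands you actually care about.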
If you are interested in learning more, you may want to check out BCI2000 and OpenViBE, both of which are Open Source and produced by extremely well-respected academic institutions.
Ask Slashdot: Gaming With Only One Hand?
hmm. Not sure why that posted anonymously, but please feel free to message me if you have any questions about BCI.
Shopping Center Tracking System Condemned by Civil Rights Campaigners
...and tracking by camera?
There are already camera systems in use in retail stores which measure customer flow, calculating dwell time in front of specific products, navigation between aisles, and so on.
Here's one example which came up in a quick Google search.
This sounds like applying that same principle within a mall to track which store a given person/type of shopper visits on a single trip.
Just like the stores, the malls already have security cameras in place, recording your visit. All they would need to do is analyse it in a different way. No one is going to get very far claiming malls or stores can't have security cameras. Are there existing laws which dictate how that footage is used?
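Analysing the footage in a different way really is the easy part. Given a per-frame tracker that logs which camera zone each (anonymised) visitor is in, dwell time is a few lines of bookkeeping; a sketch, with invented zone names:

```python
from collections import defaultdict

def dwell_times(detections):
    """detections: (timestamp_s, shopper_id, zone) tuples, assumed to
    come from a per-frame tracker.  Returns seconds each shopper spent
    in each zone, treating consecutive sightings in the same zone as
    continuous presence."""
    totals = defaultdict(float)
    last_seen = {}  # shopper_id -> (timestamp, zone)
    for ts, shopper, zone in sorted(detections):
        if shopper in last_seen:
            prev_ts, prev_zone = last_seen[shopper]
            if prev_zone == zone:
                totals[(shopper, zone)] += ts - prev_ts
        last_seen[shopper] = (ts, zone)
    return dict(totals)

log = [(0.0, "A", "electronics"), (1.0, "A", "electronics"),
       (2.0, "A", "electronics"), (3.0, "A", "food"),
       (4.0, "A", "food")]
print(dwell_times(log))  # {('A', 'electronics'): 2.0, ('A', 'food'): 1.0}
```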
You or I might not feel comfortable with these sorts of tracking systems, but at least with the radio system we can choose to turn off our phones.
Intel Drops MeeGo
What options does this leave for Qt-based development on embedded platforms?
Maemo on the N900 felt like the right direction with Nokia backing Qt, especially with projects like PySide, created solely to offer an LGPL-licensed Python wrapper to commercial developers (as opposed to PyQt). This permitted a single codebase to target desktop and mobile/tablet environments using a pleasant and completely open toolchain. MeeGo was set to carry on with Qt/X11.
But according to MeeGo's updated website, "We believe the future belongs to HTML5-based applications, outside of a relatively small percentage of apps, and we are firmly convinced that our investment needs to shift toward HTML5."
Reading Terrorists' Minds About Imminent Attack
This is really interesting as Rosenfeld himself has previously railed against other neuroscientists for commercializing P300 based lie detectors with claims of 100% accuracy:
Simple, effective countermeasures to P300-based tests of detection of concealed information - J. PETER ROSENFELD,a MATTHEW SOSKINS,a GREGORY BOSH,a and ANDREW RYAN
"It seemed timely to investigate countermeasures to ERP-based tests also because although there have been many laboratory studies claiming 85-95% accuracy, only one field study has been published, but it reported approximately chance accuracy (Miyake, Mizutani, & Yamahura, 1993). Nevertheless, one user of these methods claims 100% accuracy and is presently attempting to commercialize them (see http://www.brainwavescience.com/). Finally, the ERP approach has now surfaced in popular novels, for example, Coonts (2003), as a foolproof method."
"It is noted that the subjects used by Farwell and Donchin were paid volunteers, including associates of the experimenters. Our presently reported study uses introductory psychology students as subjects, more like the subjects one might find in the field in the sense of relative lack of motivation to cooperate with operators, and perhaps lower intelligence."
The above is the original peer-reviewed paper, this review (also by Rosenfeld) below is more recent and concise:
Leaked MS Presentation Shows App Store Plans For Windows 8
The biggest example was probably how they handle multiple screen sizes on an extended desktop: click through the dialog once, and it remembers. The next time you connect that particular screen, you get your nice big desktop back. The Linux equivalent is a full workday's worth of xorg research, and God help you if you want two different profiles (like laptop+big screen and laptop+projector).
Actually my netbook does this under Fedora 12 without issue or any special configuration.
The video chipset is Intel based (lspci says "945GME"), so it uses the fully Open Source X.org driver, and perhaps that helps.
When I plug in a screen to my netbook at the office, it recognizes the monitor ID, sets it to maximum resolution, and correctly places it relative to where the netbook sits on my desk. If I close the netbook lid and the screens go to sleep, I can unlock the system without opening it (running Synergy) and the desktop area automatically resizes to just use the monitor. If I then open the lid it resizes again to use both the netbook screen and monitor again, with the same resolutions and relative positioning as before.
The same thing happens when I take the netbook home - although there it recognizes that a different monitor is being used, with a different resolution and relative position - all of my settings are remembered without my having to do anything manually. And I should probably say all of the original resolution and layout settings were done with the default graphical tools, not by having to drop to the command line or hack any special scripts. Hell, there's not even an "xorg.conf" text file on the system; everything is auto-detected and configured automatically during the boot process.
Except for the Synergy part this is all out-of-the-box and "just works." The only caveat is that I can't run Compiz at the same time, because it doesn't handle the layout/resolution changes properly.
Scientific R&D At Home?
if those are barriers, then add a 6th: INABILITY TO PERSIST IN PROBLEM SOLVING. There are simple solutions to all of them, and some have several.
Sure, that's why I suggested "hurdles" to expect (as opposed to "barriers" to success) and included some suggestions - such as purchasing access to research papers, being certain to collect an unbiased sample group, and, when lacking credentials, finding a party which has them to review your work.
Oh my sweet variance. d00d ... I've cracke3d all of them despite being able to walk thru (ie. getting published with no affiliation and without saying I have a PhD).
Incidentally, peer review helps with spelling and grammar too. (c:
Scientific R&D At Home?
After spending the last several months learning about and experimenting with EEG in an informal environment, I would say the largest hurdles you will encounter which are likely to apply to any field of science are:
- Lack of access to high quality, peer-reviewed research - Unlike Open Source where one can simply download large and complex software (such as the Linux kernel) to examine in depth how it all works, or search large online repositories to discover discussions and explanations around key areas, scientific research papers typically have restricted access. You can find most papers online, but expect to pay upwards of $35-$50 USD per paper with only a brief paragraph-long abstract to help you determine if the information within is relevant or useful.
- The "easy" discoveries have already been made - EEG research specifically goes back to at least 1875, though many of the major discoveries still referenced today occurred in the 1960's and 1970's as the equipment got better and more sensitive. All of the classical realms of science have been around much longer of course.
- Lack of access to research-grade equipment - One way to push the boundaries of the known is with improved equipment which can take more accurate readings, thus providing information which may not have been previously explored. Again referring to EEG specifically, although various consumer-grade hardware has been released recently, the quantity and location of sensors does not match locations used by current research and the signal-to-noise ratios of the sensors themselves are quite low by comparison.
- Lack of access to large, unbiased test groups - If you lack the equipment to explore new depths, you might be able to explore new applications of known phenomena instead. However this requires access to statistically significant test groups, or in other words you can't simply do all of your experimenting on yourself or family and friends (and pets!). You need unbiased subjects and for all tests to be carried out in a carefully controlled environment if you want your results taken seriously. Which brings up the final point:
- Difficulty in presenting your results - If you don't have a PhD in your field of research, chances are you will have difficulty being taken seriously, especially if your work leapfrogs or even contradicts established work in the field. You will likely need to find another party with credentials who is willing to review your work and possibly attach their name to any publications which result. Setting the barrier to entry somewhat high does help to keep out the "kooks" after all.
All that said, don't be discouraged and best of luck with your chosen field of research. If you do decide to turn to EEG feel free to contact me directly for more information or perhaps even to collaborate.
Controlling a Robot With the Emotiv EEG Headset
Interesting. I posted much of the above information to the original article, and the content of the post now appears to have been censored:
April 27, 2010 01:17:59 GMT
Controlling a Robot With the Emotiv EEG Headset
Almost all of the degrees of freedom come from head motion and muscle artifact. EEG is very sensitive to facial muscle artifacts, and when you actually record EEG the patients have to keep very still.
The larger problem with the Emotiv EPOC headset is that the EEG sensor locations it provides do not match up to where "real" Brain-Computer Interface (BCI) research is focused. So even if you wanted to do control by "pure thought" alone the best-known areas of the brain where these signals are located are not measurable by the Emotiv EPOC.
Electrode placement is based on an international standard called the "10-20" system:
Most BCI applications focus on "imagined" movements of the right arm or hand, left arm or hand, and feet. The parts of the brain which produce electrical signals when neurons related to these extremities fire are located under the C3 and C4 positions at the top of the scalp in the diagram at that URL. Another important location is the "Cz" sensor at the exact top of the crown.
Unfortunately, however, the key Cz, C3, and C4 electrode locations (going by the 10-20 scale) for right/left/feet motor control are not available on the Emotiv hardware. Instead their hardware provides electrodes in the following 10-20 locations:
AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4
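Comparing that channel list against the motor-cortex sites mechanically makes the gap obvious:

```python
# Channels the Emotiv EPOC provides, per the list above.
EPOC_CHANNELS = {"AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
                 "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"}

# 10-20 locations over the motor cortex used by most
# imagined-movement BCI research.
MOTOR_CHANNELS = {"C3", "Cz", "C4"}

missing = MOTOR_CHANNELS - EPOC_CHANNELS
print(sorted(missing))  # ['C3', 'C4', 'Cz'] -- none are available
```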
My understanding, based on discussions with Emotiv, is that they designed their headset with as many unique channels of information as possible, at the best price/feature ratio, that would fit the largest number of potential users in a one-size-fits-all form factor. This last constraint prevented them from including Cz, C3, and C4, because the exact locations from user to user were not consistent enough to be relied upon in a consumer setting (their target market). Locations for an adult would not be the same as for an adolescent user, and getting the locations lined up precisely is "too hard" for the casual or non-technical public.
So in other words, if you want control by thought alone that is as fine-grained as the current state of technology allows, you'll have to wait for an updated EEG headset model from Emotiv or another manufacturer - or of course learn how to build your own from the .
Controlling a Robot With the Emotiv EEG Headset
Here is a similar project using the same Emotiv EPOC headset to control LEGO Mindstorms robots via EEG:
YouTube video link
(disclaimer: yes, I'm the project lead)
How the iPad Is Already Reshaping the Internet (Sans Flash)
This has nothing to do with the iPad. Once again, Apple is getting the credit for something that was already happening in the industry.
Apple, in this and related instances (such as the iPhone), can be compared to crystalline nucleation:
Examples of nucleation:
Pure water freezes at around -42C rather than at its melting point of 0C if no crystal nuclei, such as dust particles, are present to form an ice nucleus.
"The process of nucleation and growth generally occurs in two different stages. In the first nucleation stage, a small nucleus containing the newly forming crystal is created. Nucleation occurs relatively slowly as the initial crystal components must impinge on each other in the correct orientation and placement for them to adhere and form the crystal. After crystal nucleation, the second stage of growth rapidly ensues."
...or in other words, all of the properties necessary for tablets to take off may have existed and been available for some time, but if nothing else Apple has mastered the ability to seed the market with the right mixture of elegant design and hype to make the general public take notice, just as it did to the smartphone market with the introduction of the iPhone.
How the iPad Is Already Reshaping the Internet (Sans Flash)
Open source people will do a port of OpenOffice for Maemo/Android eventually, in a couple of years, as only two people will work on it. Yet Apple is shipping it today.
Just FYI, I've been running OpenOffice on my Maemo 5 N900 for months; there's a native ARM port for Debian which runs directly, has its own menu icon, etc. It's all handled by the native package manager, so no stress there.
Now granted, it launches slowly, and the only thing I tend to use it for is converting email-attached .doc files to PDF for more convenient display in the lightweight native PDF viewer, but it's been available for some time.
If I needed to type out a significant amount of text for a professional document, I'd do it on my netbook or desktop PC, either of which is better suited to the task - as opposed to say a device with no physical keyboard (or tiny keys).
Learning Python, 4th Edition
I sincerely hope that this version is better than the first edition, although anything short of a random re-arrangement of pages would serve as an improvement. The first edition actually delayed my initial use of Python by about a year and a half. I had heard wonderful things about the language so I figured, "Ah, an O'Reilly book!" Big mistake.
Wow, I'm quite surprised actually, I had exactly the opposite experience with the first edition of "Learning Python."
I distinctly remember picking up the book in '99, reading the first three chapters to get introduced to the language basics, then writing my first web-scraper to pull weather forecasts off weather.com and forward them as emails, arriving on my handset as an SMS message (AT&T was running a free email-to-SMS gateway at the time, and didn't charge to receive the messages). I think I skipped ahead to chapter 11 or so to find the code for reading html as text from a URL, as opposed to a local file.
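The parsing half of that scraper is still only a few lines today. The 1999 weather.com markup is long gone, so the page fragment below is invented purely for illustration, but the shape of the code is the same:

```python
import re

# Hypothetical markup: the real 1999 weather.com layout is long gone,
# so this page fragment is invented purely for illustration.
html = '<div class="forecast">Partly cloudy, high 72F, low 55F</div>'

def extract_forecast(page):
    """Pull the forecast text out of its (assumed) container div."""
    match = re.search(r'<div class="forecast">(.*?)</div>', page)
    return match.group(1) if match else None

forecast = extract_forecast(html)
print(forecast[:160])  # SMS messages cap out at 160 characters
```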
I had never written a tool which performed network lookups and was really impressed with the simplicity of the language and the book. The progression was from the very general to the very specific. The first three chapters were a history and basic introduction to the relatively unique concepts such as whitespace handling and how to deal with strings, as well as how Python handles common constructs like while and for loops. If I recall correctly it stepped into classes and objects after that, then proceeded into specific libraries.
I've been doing professional coding in Python ever since, and always recommend "Learning Python" as an introduction to newbies.
My only disappointment, in fact, is that the book has grown so much in size over the course of the last few editions.
New Touchscreen Technology Like Writing On Paper
The first iteration is geared around media consumption.
Perhaps a second line will integrate technologies like this for media creation.
Either way, expect something like it running Android.
Willow Garage To Give Away 10 Open Source Robots
The hardware specifications alone are pretty impressive:
The PR2 robot has two eight-core i7 Xeon system servers on-board, each with 24 GB of RAM, a 500 GB internal hard drive, and a 1.5 TB external removable log drive. The computers and most of the sensors communicate over a 16-port gigabit Ethernet hub with a 32-gigabit backplane. The robot also has an on-board, dual-radio router that can be bridged into a WLAN, as well as a secondary, stand-alone access point for laptop or smart phone access.
The PR2 ships with sensors in the head, arms, and base. The head contains two stereo camera pairs coupled with an LED pattern projector, a 5MP camera, a tilting laser range finder, and an IMU. The forearms each contain an ethernet-based, wide-angle camera, while the grippers have three-axis accelerometers and pressure sensor arrays on the fingertips. The base has a fixed laser range finder.
That's a fair bit of grunt to throw at the OpenCV libraries, which are listed under Supported Projects in the Software section. No surprise either: Willow Garage has taken over hosting the project from Intel.
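The grunt matters because even a basic vision primitive like edge detection is per-pixel arithmetic over every frame from every camera. A toy, pure-numpy Sobel gradient (the kind of kernel OpenCV implements in heavily optimized C) shows the shape of the workload:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels -- a toy stand-in
    for the optimized edge-detection routines OpenCV provides."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_magnitude(img)
```

Scale that up to multiple megapixel streams at full frame rate, plus the stereo matching and laser point clouds, and the on-board Xeons stop looking like overkill.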
Scientists Use Quake 2 To Study the Brains of Mice
...they need to hook up the screen to a camera feed from the flying beetles covered earlier this month.
Let the mice steer the beetles!
Linux Games For Non-Gamers?
I've not really played PC games since the Doom era so I'm really out of touch here. I don't have a real gamer box, just a simple video card. What do Slashdotters think I should try? A simple FPS or some type of networked game would do.
Sounds like you've missed a fair few generations of games then.
Try giving Enemy Territory a go.
Quite addictive in its time and a nice cooperative element to online play.
It was released back in 2003, and runs quite well on Linux. You did mention only having a "simple" video card, but odds are better than even that your system has sufficient support - even basic integrated video chipsets tend to have some degree of OpenGL support these days.
System requirements are: 600 MHz CPU, 128 MB RAM, 32 MB OpenGL graphics card, 56.6k Modem/LAN
It's not quite Open Source, but it is (and always has been) free as in beer.
AMD's OpenCL Allows GPU Code To Run On X86 CPUs
Microsoft wouldn't allow licensing dual cores on netbooks.
As far as I can tell, that only applies to Windows XP.
See this article (which, admittedly, is talking about a "nettop" box, not a netbook):
...first thing you see is that it runs on Windows Vista - XP under Microsoft's licensing terms for netbooks limited it to single core CPUs.
Got anything which specifically states that OSes besides XP (which they've been trying to drop support for for some time now) are restricted regarding dual cores?
To What Age Do You Expect To Live?
Hearing implants are apparently already pretty good. They went from "hear something" to "hear people talk, but a bit wonky" in a decade or so.
The obvious huge difference here is that the implants are connected to I/O already present in the brain wetware and it's still extremely difficult to pull off.
Your calculator example would require completely artificial interface layer to the brain ...
Subvocalizing would be relatively straightforward to do, though. You could have a conversation with your implants hooked to your auditory nerve.
I'm assuming by "subvocalizing" you mean effecting speech without actually using the human voicebox, then transferring that speech information electronically to a "receiver" which is hooked up to a hearing implant?
So why can't I subvocalize "Computer, what's 63 * 14.69", pass through speech recognition, process the result, and transmit back to my own hearing implant?
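The "process the result" stage of that pipeline is trivial compared to everything around it. A sketch, assuming the speech recognizer has already handed you a plain text string, and handling only the simple <number> <operator> <number> form:

```python
import re
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def answer(utterance):
    """Evaluate a spoken-style arithmetic query.  Only the
    <number> <op> <number> form is handled; a real system would sit
    behind a speech recognizer and a far richer grammar."""
    match = re.search(r"([\d.]+)\s*([-+*/])\s*([\d.]+)", utterance)
    if not match:
        return None
    a, op, b = match.groups()
    return OPS[op](float(a), float(b))

print(round(answer("Computer, what's 63 * 14.69"), 2))  # 925.47
```

The result would then be fed to a text-to-speech stage and piped back into the hearing implant as audio.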
In the long term, surely any "interface layer to the brain" would ultimately have to enter human consciousness in order to be perceived anyway. I don't think a wet-wired calculator would cause an imaginary physical calculator to suddenly appear in anyone's head - one needs a mental model for interacting with the device. Speech and hearing seem an ideal, or at least achievable, goal.
The next step would be to actually process and manipulate the visual stream between the eyes and the brain. Perhaps you could "draw" a calculator over someone's visual field, heads-up-display style. I would expect the level of precision and bandwidth to remain out of reach for our lifetimes, however, and even then you still need a means of "pressing" the buttons.
Much more likely, easier, safer, and possibly just as effective to display onto glasses - in which case the audio component can be built into headphones at the ends of the glasses, and a directional mic could pick up whispering. Why tap into the brain at all? (c: