Consumer Device With Open CPU Out of Beta Soon
I know this is going to be flamebait. But before you flame me, consider the following: I'm a researcher and get paid for what I do. I've released quite a bit of code as open source and invented a bunch of algorithms which are not patented and are used in many applications (think email spam filters, face recognition, etc.). And I've worked in both industry and academia, for almost two decades. So I know both open and closed source.
First off, ideas have value. As in dollar value. Take NVIDIA for instance - they don't have a semiconductor fab, so they send their chip layout to a place like TSMC or GlobalFoundries or Samsung or any other place to have their files turned into chips. These places are like modern printing presses. If their mask, VHDL, or layout information were open source, they wouldn't be able to reap the benefit of their investment in building the next generation of chips. Or, as a more extreme case, take ARM. They design processor cores and license the microarchitecture to other (possibly fabless) design companies such as Apple, which, in turn, tweak the design, add more stuff to it, and then ship it to the foundries. In other words, all the good stuff is in the plans, much less in the actual hardware.
So, designing an open source CPU is probably not going to work. Why not? Well, unlike with software, there's a massive barrier to entry. Think millions of dollars, rather than the few hundred it takes to buy a laptop and install some version of GCC on it. Few users can afford this. This pretty much kills the model where many users take advantage of a good idea and share it to make it better. Yes, there are good ideological reasons, but most people don't do things for ideology (note the emphasis on most). They do them for fun, profit, fame, convenience, or some other less noble goal.
As for the piece of hardware itself, hmmm, I'm not sure why I would want to buy an overpriced, function-limited, and incompatible device.
Is Attending a CS Conference Worth the Time?
Attending a conference (computer science or otherwise) doesn't mean much. You get to travel, stay at a fancy hotel (or a youth hostel if your university is poor) and present things. So what! There's that extra line on your CV.
It's worth it, though, if the people attending the conference are experts and you manage to discuss your work with them. Or if others see your work and build on it. Or if your work gets cited a lot as a result of attending the conference. Or if you manage to start an exciting joint research project. I've been to about 50 conferences so far and have published over 100 papers, and the good ones are really worth it.
I'm not so sure about CCSC, though. Beyond that, I'm not a big fan of PhD conferences or sessions. If the work is good, everyone will want to hear it, so it'll be featured in the main conference anyway. If it isn't, having a special session won't help you.
Death Grip Tested On iPhone Competitors
Besides a) attenuation due to hand holding and b) change of the antenna characteristics due to bridging, there's a third problem which really exacerbates the first two: the antenna of the iPhone 4 is highly directional. In other words, it matters a LOT which way you point the phone. Sometimes even small changes around it can make a big difference in terms of whether you get data or not.
You can test this out (assuming you've got access to an iPhone 4) by running a speed test application (there are plenty in the App Store) while holding / pointing the phone in different ways. I can trigger signal loss even without holding the phone. No bumper whatsoever is going to fix that problem, and this is plain and simple bad antenna design. I lose a lot more data when streaming radio on the iPhone 4 than I did on the 3G, even though the bandwidth is (potentially) much higher.
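To see why a few dB of attenuation can turn into a dramatic throughput drop, here's a minimal back-of-the-envelope sketch using the Shannon capacity formula. The channel bandwidth and the SNR/attenuation figures are hypothetical, illustrative numbers, not measurements of the actual phone:

```python
import math

def shannon_capacity_mbps(snr_db: float, bandwidth_hz: float = 5e6) -> float:
    """Shannon capacity C = B * log2(1 + SNR), returned in Mbit/s.

    snr_db is the signal-to-noise ratio in decibels; bandwidth_hz is
    an assumed 5 MHz channel (illustrative, not the phone's real value).
    """
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Hypothetical scenario: 20 dB SNR when held freely vs. a 24 dB
# penalty from grip/orientation.  Capacity is logarithmic in SNR,
# so a modest-sounding dB loss near the noise floor is devastating.
for label, snr in [("free hand", 20), ("bad grip/orientation", 20 - 24)]:
    print(f"{label}: {shannon_capacity_mbps(snr):.2f} Mbit/s")
```

The point of the sketch: because capacity grows only logarithmically with SNR, the same attenuation that is barely noticeable in a strong-signal area can push a weak signal below usability, which matches the "streaming drops out entirely" experience.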
Neural Interface for Gaming Getting Closer?
What the guys at NeuroSky are describing is complete vaporware. I work with brain signal data myself and know quite a few people who do. Basically, at present there are two methods that kind of work:
You implant a bunch of electrodes in a person's brain. See Michael Black's work (Brown University) on analyzing this data. You get roughly 30 bits per minute out of this, a bit more with some training. This is done for people who are seriously disabled, i.e. quadriplegics; the electrodes are implanted in the motor cortex (which would otherwise go unused in people who cannot move their limbs).
An alternative is to use EEG. The caps usually come with about 100 electrodes, take an hour to put on, and require lots of conductive gel. For instance, Klaus Muller's group (Fraunhofer Institute, Berlin) does such work. They get data rates of up to 20 bits per minute. And yes, you can play simple games (they've got a cool demo of a person playing Pong using the electrodes).
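To put those information-transfer rates in perspective, here's a quick back-of-the-envelope calculation. The 1.3 bits-per-character figure for English text is a common rough estimate; treat all the numbers as illustrative:

```python
# Rough estimate of the information content of English text, in
# bits per character (an assumed, commonly quoted ballpark figure).
BITS_PER_CHAR = 1.3

def minutes_to_type(message: str, bits_per_minute: float) -> float:
    """Minutes needed to convey `message` at a given channel rate."""
    return len(message) * BITS_PER_CHAR / bits_per_minute

msg = "hello world"  # 11 characters, roughly 14.3 bits of information
for label, rate in [("implanted electrodes (~30 bit/min)", 30.0),
                    ("EEG cap (~20 bit/min)", 20.0)]:
    print(f"{label}: {minutes_to_type(msg, rate):.2f} min for {msg!r}")
```

Even under these generous assumptions, a short greeting takes the better part of a minute to convey, which is why claims of rich, low-effort control from a couple of dry electrodes should raise eyebrows.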
The big caveat is that there's just absolutely no way you can put a few electrodes on your scalp and get out the information the NeuroSky people are claiming. The whole thing looks really fishy when you check their homepage http://www.neurosky.com/: pretty much no information on who does the work, what their technology is, etc.