Kernel DBus Now Boots With Systemd On Fedora
Portability also means giving up on system-specific optimizations and features. Some people have decided that Linux's market share means it's time to bank on those optimizations. Why not?
Billion Year Storage Media
Indeed, but what information would you choose to store?
MagicPlay: the Open Source AirPlay
There's already a competing open standard.
It's what I use with my Android devices (via BubbleUPnP), XBMC, and my Squeezebox.
Most precise measuring tool I've used ...
What was it?
Book Review: The Human Division
It isn't useful on such a trivial example, but add in pointers...
int * func(char* a, char* b);

int *
func (char *a,
      char *b);
(or better elaborate examples I can't be assed to come up with for a /. comment) ... and the milliseconds and frustration saved in parsing function declarations starts to add up
Ask Slashdot: Best Science-Fiction/Fantasy For Kids?
I concur on Janet Asimov's books. I actually read those in 7th or 8th grade, after I had read many of Isaac Asimov's other books. I was just looking for anything with "Asimov" on it. I recall finding them a bit juvenile, but still a good enough read that I still remember them 20 years later!
Ask Slashdot: Why Aren't You Running KDE?
I've long been a KDE user, switched to it in the KDE 4.1 days and never understood why people were so unhappy about it. I found it to be slick and useful, despite the regular problems with the NetworkManager applet in Debian Unstable. I just used the Gnome applet instead, which fit in without a hitch.
Last year, finally frustrated enough with juggling between the windows of my various terminals and editors, I chose to give a tiling window manager a good try, and spent some effort on the ill-named Awesome (seriously, how do you SEO that?).
Though it's certainly not aimed at Joe Six-Pack, in that you actually have to edit the Lua-based config file to configure it yourself, I found it extremely powerful and perfectly suited to my needs. The "tag" system for organizing your windows is superb, giving me precise control over which windows to display.
I discovered that I didn't have a use for all the frills of Gnome and KDE, except for USB-key and Wifi network management which are both accessible from the CLI anyhow (see udisks and nmcli). ... does this mean I've turned into a greybeard?
Emacsy: An Embeddable Toolkit of Emacs-like Functionality
I've been using CScope in Emacs for about a year (in fact, I added the entry to ascope.el on that wiki page you linked to), and I've recently switched to Semantic from CEDET and GNU Global.
Sadly, the Emacs Code Browser (ECB) linked to from the CEDET page seems to be broken for recent versions of Emacs and CEDET and unmaintained.
While I dislike Eclipse for bloat and difficult extensibility, I have yet to decide whether Emacs has caught up with it for code browsing.
Record-Setting 100+ T Magnetic Field Achieved At Los Alamos
I used to work next to the French Laboratoire National des Champs Magnétiques Intenses (National Laboratory for Intense Magnetic Fields) and was lucky enough to visit it once during the yearly Science Day (why don't we have this in the US?).
They claimed they had the second most powerful magnets in the world, IIRC behind Fermilab, at about 32 T (again, IIRC). Note that this is a sustained magnetic field, not transient like the OP's record. (Still, hitting 100 T without destroying the magnet is one hell of a feat! Now if only we could find a power source to sustain such a field...)
32 T is extremely high, more powerful than any natural magnetic field on Earth (according to WP, the Earth's field ranges from about 25 µT at the equator to 65 µT at the poles). The most powerful permanent magnets (rare-earth) achieve a little under 1 T, and good luck getting such a magnet off a piece of steel. The 32 T field exists only in a space about the size of two Coke cans, at the center of a large cylindrical apparatus formed by the concentric electromagnets. But even at such strengths, the fields we make are dwarfed by stellar and interstellar magnetic fields, which have been calculated to reach hundreds or thousands of teslas.
Fun facts: they run the magnets at night, when power is significantly cheaper. They have big banks of capacitors and batteries for spare surge power. The (classical) electromagnets aren't built by spooling wire onto a tube, because wire isn't thick enough to sustain the kind of current that goes through them. Instead they take a thick copper tube, slice it into a spiral, and insert an insulator into the gaps.
Their most powerful magnets were formed of a superconducting electromagnet core surrounded by standard electromagnets. The cost of superconducting materials is what prevents them from making more powerful stuff.
But despite all that, I'm still not sure what kind of experiments require such powerful magnetic fields. Such awesome engineering, so few applications...
Why the Raspberry Pi Won't Ship In Kit Form
The fact that they won't ship in kit form isn't news*; it's more interesting to learn that they have HW-accelerated versions of MPEG4 and H.264 (and only those), and that all these libraries are closed source.
Furthermore, claims that they have the fastest mobile GPU are fluff: we only have the subjective word of someone who worked on it, not a neutral third party, and it'll be overtaken by someone else soon anyhow.
Finally, I'll venture that any complaints about the Nvidia binary driver are going to be small fry compared to Broadcom's drivers.
*It's just not possible to hand-solder BGA packages. At best you'd need a reflow oven, and *that's* still tricky with the sizes involved here.
Site Offers History of Torrent Downloads By IP
From the "Contact Us" page (which, among other things, lists a postal address in an Antarctic research base):
This site is a joke. But its data is not.
San Francisco Team Wins DARPA's De-Shredding Contest
You're right, s/knowledge/artifact/. People keep an attachment to physical objects long after they've become obsolete, which is why I still buy hardcover books in this age of encroaching e-books. The novel explores these themes well.
San Francisco Team Wins DARPA's De-Shredding Contest
Vernor Vinge's 2006 novel Rainbows End described how a library was being digitized by shredding all the books, thus destroying the analog knowledge.
One step closer...
Physicist Uses Laser Light As Fast, True-Random Number Generator
A while back, the Simtec Entropy Key was making the rounds among Debian devs; it claims to exploit quantum effects in P-N junctions to be a true RNG.
They seem serious and I tend to trust paranoid Debian developers' opinions, but ultimately I don't have enough knowledge myself to make a confident judgment call. I'd be curious about more opinions.
Belgium To Give Up Nuclear Power
France has a power plant near Givet, which is situated in a "peninsula" of French territory going into Belgium. That's going to be pretty convenient when Belgium needs to buy massive amounts of power from abroad (hint: Belgium is very poorly endowed for hydro/solar/geothermal energy)
Ron Paul Suggests Axing 5 U.S. Federal Departments (and Budgets)
"Cuts of this scale will also be accomplished by a Paul Presidency abolishing the Transportation Security Administration and returning responsibility for security to private property owners, abolishing corporate subsidies, stopping foreign aid, ending foreign wars, and returning most other spending to 2006 levels."
Source: his campaign website.
I'll scream bloody murder over abolishing the Departments of Education and Energy, but I can see where Ron Paul supporters are coming from.
Google+ Loses 60% of Active Users
As others have commented, Facebook probably has less than 40% active users. But that's not what keeps me on G+.
I use it as a sort of augmented Twitter, following a bunch of science bloggers I find interesting (via a Shared Circle). It started out as a small list from Maggie Koerth-Baker, the science blogger at BoingBoing, and slowly accumulated more people through recommendations (network effect!).
Nowadays, Facebook is for the silly friends' stuff, but G+ is slowly turning into a major science news source populated by authors I respect.
Looking Beyond Detroit For Engine Innovation
So we’d have to retool 320 machines. Is your change that good?
A perfect illustration of why we're resistant to change. And then some new company comes along with that change embedded in its process, and trounces the old one. Then the cycle repeats.
Amazon Kindle Fire Surfaces
No wireless. Less space than a nomad. Lame.
My point is that maybe the features that the Kindle Fire is missing aren't worth it for the market they're aiming at. Kind of like the iPod in its day.
New BIOS Exploiting Rootkit Discovered
I really, really hate what Gigabyte does with their BIOSes: their BIOS backed itself up to the end of some of my disks, changing the OS-visible size of the disk via the Host Protected Area (HPA) and squashing the mdraid metadata that was happily living there.
By the time I understood what was happening, 3 of my 6 RAID disks were screwed, as I had ignorantly swapped the disks around thinking it was some controller error.
That feature was not advertised, and that version of the BIOS had a bug where the feature didn't properly detect which disks it could safely do this to (it only looked for NTFS/VFAT partitions, natch), and it could not be disabled. While I can understand the purpose and usefulness of the feature, releasing it with such a bug has made me swear off Gigabyte.
For reference, it was a GA-P35-DS3, with BIOS F12.
What is "hard"?
What are the qualities of a given task that make it "hard"? Why are some tasks hard for some people, easy for others?
I propose that a "hard" task is one where I have to overcome more discouragement. A hard task requires more motivation. The difficulties to be overcome include boredom, physical tiredness, indecision, and self-doubt.
It is easy for me to write code. It is so difficult for me to seek a new job that I haven't done it in two years.
(post to be fleshed out)
Similar interests or behaviours, and biking in the snow
There's been a lot of snow in the town I live in, more than I've ever seen since I've lived here. I still go to work by bike.
I've noticed something: the most difficult place to bike is where other people have left tracks before. The bike slips and shifts within the other tracks as I move through them, and I can't seem to follow another track exactly. The small differences in direction make for a much more difficult and bumpy ride than if I were biking through virgin snow.
I've noticed that when people have similar interests or behaviours, the small differences between them can cause a great deal of friction. It's not "how much we have in common" but, "how much is different".
Do you see the link between the two?
The Wise Man on the Mountain (on teaching)
(quickly whipped up metaphor on the difficulty of teaching)
There once was a wise man who lived at the top of a mountain. He knew a Great Truth, that was bestowed upon him by Greater Powers. He would freely provide this Great Truth to anyone who sought him.
Pilgrims regularly came to him, seeking to learn some truths, and he would bestow his Great Truth upon them. He would tell them: "Mass is Energy!" But the pilgrims would not understand him. They would think "wood, which is mass, can be burnt to give off heat, which is energy; that is what he meant!" or "the energy of some mass is related to the square of the speed of the mass, but mass itself is not energy!" In the end, they thought him crazy.
One day, a man named Albert heard the old man's Great Truth, and worked on it for years and years. One day, after some of his hair had turned white, he said "E=mc²!", and understood what the old man meant when he said "Mass is Energy!"
It had taken a very special man to work his way from what he knew to an understanding of the Great Truth, and that man was not the wise man on the mountain. The wise man on the mountain could not bring the pilgrims up to his Great Truth.
The role of a teacher is not to bestow Great Truths upon his students; it is to bring those students to see those Great Truths by themselves. Only then will they understand.
Aging, synthesizing, simplification
Lots of old respectable people seem to go on wild tangents. See: Tipler, Freeman Dyson, Linus Pauling.
As we age and learn, our brains cull unneeded neural connections. This is a natural and required process. Too many neural connections have been correlated with neural pathologies (autism?).
I wonder if too *much* culling makes people oversimplify their understanding of things, as their brain tries to fit new situations into their existing schemas?
Each level of indirection added to a program makes it more generic, but also makes it twice as complicated to understand.
Make it too generic, and nobody will understand it.
Android (this is not the Linux you are looking for)
There's a lot of excitement around Android. It is often mentioned that Android is a Linux-based stack.
That is correct. What I tend to infer from it is not. When I hear "Linux-based", I naturally think of the whole ecosystem around it (this is the conflation Stallman has been fighting against with the term "GNU/Linux"). In this case, it is quite specifically Linux-based as opposed to GNU/Linux-based, because nothing in the Android software stack comes from GNU.
Not the libc. Not the graphical system, though it's based on the framebuffer. Hell, even its Java isn't standard, in the sense that it doesn't have the standard Java Class Library (not even the CLDC/MIDP subset).
Porting anything to Android from a traditional Linux system is a big undertaking, akin to porting a Windows or Mac app to Linux, or vice versa.
Linux is cool for Android because it's GPLv2, not v3, so manufacturers and vendors can lock it down TiVo-style. Had Linux been GPLv3, Google would have shopped for a kernel elsewhere.
I think Android is a cool undertaking and a step in the right direction of openness, but OpenMoko, Moblin, Maemo it ain't.
Like thousands of geeks before me, I'm learning to use Emacs. It is an extremely powerful editor. I've been using other, easier editors previously. There are a couple I like, notably Notepad++ on Windows and Kate on Linux.
Though I am impressed by all that Emacs can do and by its customizability, there is one thing I hate: its user interface.
Emacs was first created by a bunch of geeks (some long-forgotten dudes like a certain Richard M. Stallman), working on powerful mainframes (not exactly, but stay with me here). This was before I was even born.
Those computers were quite different in usage from what we have today. There were no user interface guidelines set in stone.
Since then, there has been stuff like IBM's CUA (Common User Access), or Apple's HIG (Human Interface Guidelines).
Emacs does not follow them.
Emacs calls "windows" what we now call "frames". And vice-versa. A "selection" in text is a "region". "cutting" is called "killing". "paste"? "Yank". There's this notion of "buffers", where everybody now has "open files" (What is it? "buffers" aren't necessarily "files"? Well it's close enough!).
Wanna close a file? Whoops, Ctrl-W just "killed" (cut) your selection. Wanna Ctrl-X a piece of text? Whoops, that's the all-important primary shortcut.
Honestly, Emacs has all the features a modern editor provides, and enough extra to cure cancer. However, it could use a much-needed update to its UI terms and shortcuts. It's not impossible: most of its features map cleanly to what exists elsewhere under another name (as mentioned previously).
It would open emacs up to a million young motivated geeks. Let the old geezers die off.
Moral evolution of our reaction to violence
There are three stages in the reaction to violence:
1) "Strike your enemy seven times as hard as he strikes you." The basic idea is to discourage violence by warning that the revenge will be much worse. It has the huge flaw of encouraging the escalation of violence, turning a perceived slight into a full-scale war (as illustrated in this text).
2) "An eye for an eye" (Code of Hammurabi, Law of Talion) This tries to fix the escalation of violence. Problem is, we humans are imperfect, and will perceive intent where there was none, and easily fall back to the vendetta stage.
3) "Turn the other cheek" (Jesus, Gandhi) This tries to completely defuse the cycle of violence. Some people perceive it as a sign of "weakness".
Culturally, I think we still haven't learnt our lesson from that guy who got nailed (literally) for suggesting we be nice to each other for a change. Morally, we remain at the second stage.
LED Rubik's Cube
I've been wanting to get back into hobby electronics for a while. I particularly wanted to make something with multicolored LEDs, mainly because of the "Ooh, Shiny!" factor and my love for smooth ambient lighting.
A slashdot comment recently gave me the idea of making a LED Rubik's Cube: a cube where each facet on a face is illuminated from the inside by a LED giving that facet its color.
It's immediately obvious that I couldn't build myself a functional Rubik's Cube: getting the wiring to work while the pieces move around is a problem that might not even be solvable. So the next idea is to have multi-colored LEDs for each facet, which change color to emulate the movement of the faces.
But that's just the ideal goal. I would already be very happy if I managed to make a not-too-large cube where each face is an array of 3x3 RGB LEDs, all independently controlled.
Note that the center facet of each face does not change color.
The number of RGB LEDs necessary for this is 3x3x6 = 54. Since each RGB LED has three control wires, that gives us 162 wires to control!
That's unimplementable if we want to drive each LED directly, but with Charlieplexing we can use a controller with 14 pins to control all that (in fact, 14 pins can control up to 182 LEDs).
The problem with Charlieplexing is that only a single LED can be turned on at a time. To give the illusion of multiple lit LEDs, we exploit human persistence of vision and turn each LED on rapidly in sequence. Problem is, the more LEDs we have, the less time each spends turned on, and the dimmer it seems. To complicate things, we already have to play with each RGB LED's duty cycle to manage its color!
An alternative to Charlieplexing the entire cube from one controller is to 'plex each face separately, under the control of a minimal microcontroller connected to a master through a serial bus such as I2C (how about cellular automata reacting automatically to their neighbours!? OK, I'll stop). In this case, each face only has to 'plex 27 LEDs (9 RGB LEDs), which is manageable with 6 free pins, plus 2 for I2C, plus 2 for power => 10 pins. However, the PICs and (AFAICS) AVR ATtinys come in either 8- or 14-pin variants, so 14 pins will have to do...
Another possible trade-off would be to remove the bottom face from the cube.
It looks like some other people like to play around with this as well.
I love C. It's my favorite language. Of course, as I work in the embedded systems field, I'd be very unhappy indeed if it were otherwise.
I love how it is a simple language (it's been said that it is a language that you can "hold in your head in its entirety", unlike C++). I love how the original reference book is the ultimate example of a well-made and concise manual. I love how it is universally supported. Most of all, I love how it maps to assembly in a pretty straightforward manner*.
But I'm still young, and I guess it's a sign of maturity that I start to realize its shortcomings and quirks.
- The difference between putting an include between <> or between "" ends up being arbitrary. Between angle brackets, it is supposed to be a system include; between quotes, a user include. But with all the add-on user libraries we install system-wide nowadays, the distinction ends up being arbitrary and useless.
- Multiple inclusions. A.c includes B.h and C.h, but B.h also includes C.h. Thus the compiler will complain that stuff defined in C.h is redefined. Hence the wonderful include-guard protections (#ifndef FOO / #define FOO / ... / #endif) that exist in header files around the world. Why this isn't yet handled automatically at CPP's level is beyond me.
- Actually, the whole include system is pretty broken. It stems from a technical problem, of course, but having to run around trying to find the include file that defines the function you need is unnecessary, nowadays. We've got powerful source-navigating engines, why aren't they included in the compilers?
- No nested comment levels. When debugging, it is really useful to comment out a section of code, often increasingly larger, or to comment out a section that already includes comments. Nope: comments end at the first */, so that's not possible.
- why do people insist on hiding structs and enums inside opaque type names such as ts_thingie instead of a nice struct thingie? I find the latter so much clearer when going over code.
- Portable code is a bloody mess full of #ifdefs. Good luck trying to follow the flow of your code in that.
- TODO: put more stuff here.
Conclusion: this is just a rant that I needed to get off my chest. Each problem I raised exists for a good reason and hasn't been universally solved for other good reasons, which I'm too lazy to research and list here (besides, that's not the point of a rant).
However, it's healthy to know the limitations of the tools you're using.
I've got to do some Java some day.
*depends on the optimization level, of course, but that I can find the correspondence pretty easily most of the time never ceases to satisfy me.
writing vs publishing
I realize that this is the third post in a couple of days. I know that nobody reads this, but at this point it doesn't matter.
Because what's important to me is the need to express myself. That this doesn't reach anybody, that I get no feedback, doesn't matter to the psychological mechanisms that make me write this. The mere act of writing it down is enough to satisfy the urge. Anything else is a bonus.
In fact, I believe that the important aspect of writing this is just to articulate my thoughts. As long as they stay in my mind, they are a confused mess. Only by writing them down can I see whether they make any sense.
Maybe I'm just fooling myself. Of course, if it gets to the point that I have to write it down, the thoughts are already articulate enough.
GPL vs BSD
Today, in mid-2007, more than 18 years after the publication of the original GPL, and about as long for the BSD license, people are still arguing about whether the GPL or the BSD license is the more free (as in speech).
Let's make it simple: BSD allows you to do much more with the code, at the expense of the freedom provided with subsequent modifications of the code. The GPL restricts the ways you can use the code so that subsequent versions enjoy the same freedom. The GPL really is as free as can be, provided that freedom stays intact.
So there is more freedom with version N of BSD code than there is with GPL code, but you have no guarantee for modification N+1 of BSD code, whereas GPL code stays just as free.
Ask HP-UX users if they enjoy the same liberty with their system's code as HP did. Then ask GNU/Linux users if they enjoy the same liberty as the kernel hackers.
This has an impact on the quality evolution of software released under those licenses. With a BSD license, you can never ensure that the original code will enjoy improvements made by licensees. You can always, technically, bring those improvements back to the original GPL code.
BSD code will be used far and wide. You'll get nice warm feelings to know that your code was used in the PS3, for example, though you'll never get to try it yourself, as the modifications will never come back to you. If you'd rather have that nice warm feeling, knowing that you helped out other people out of pure altruism (even if it enabled them to make billions of dollars and they didn't give a cent back to you. Yeah, I'm flaming), then go BSD.
The GPL ensures that everyone continues to enjoy the same liberty. As such, all changes are public and can make their way back to the original source. So if you want your code to continue to improve from others' contributions, go GPL. Of course, this comes at the expense of how far your code will spread, but that further spread wouldn't have benefited you (or your code) anyway. The warm feeling won't be as strong, but your code, your baby, will be better off.
If you really believe in freedom, what's the use of a freedom you can't ensure will last?
And by the way, downloading that pirated version of Spiderman 3 is the same copyright violation as using GPLed code in a proprietary application.
At one point, Windows' TCP/IP stack behaved identically to a BSD's, implying identical code.
About libraries: Not all GPL code is stand-alone. Some of it is part of libraries, and in this case the GPL limits use more strongly than for stand-alone software. Libraries are components of larger software and are worthless by themselves. The GPL restricts the use of a library in that you cannot use it in non-GPL software. It is viral in that, to use a GPL library, the whole software has to become GPL. The LGPL license exists so that you can still have a library under a GPL-like license that doesn't extend to the application using it.