


Linux Needs Resource Management For Complex Workloads

lkcl Re:complex application example (161 comments)

hi mr thinly-sliced, thank you this is awesome advice, really really appreciated.

about a month ago

Linux Needs Resource Management For Complex Workloads

lkcl Re:complex application example (161 comments)

> the first ones used threads, semaphores through python's multiprocessing.Pipe implementation.

> I stopped reading when I came across this.

> Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement, and we can always rewrite critical portions (or, later, the entire application) in c once we have delivered a working system, which means that the client can get some money in and can therefore stay in business.

also i worked with david and we benchmarked python-lmdb after adding in support for looped sequential "append" mode and got a staggering performance metric of 900,000 100-byte key/value pairs written per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng back from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python. ... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong. converting the exact same program to c is not going to help.

> The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15 second delay (from the external source that we do not have control over) so a few extra seconds delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

> Take the incoming messages from UDP and post them on a message bus should be step one so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources) so it is good to have that confirmed... my concerns are as follows:


* how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received?

* what support from the linux kernel is there to ensure that this happens?

* is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else?

* the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?

* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?

this is exactly the kind of thing that is entirely missing from the linux kernel. temporary automatic re-prioritisation was something that was added to solaris by sun microsystems quite some time ago.

to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements.
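as a concrete baseline, here is a minimal sketch of that "drain UDP into a message bus first" step in pure python. the buffer size and priority value are illustrative assumptions, and the SCHED_FIFO call needs root/CAP_SYS_NICE (it silently falls back to normal scheduling here):

```python
import os
import queue
import socket
import threading

def start_udp_receiver(sock, out_queue, stop_event, bufsize=65536):
    """drain datagrams from sock into out_queue as fast as possible,
    so a slow consumer never causes the kernel buffer to overflow."""
    # enlarge the kernel receive buffer (clamped by net.core.rmem_max)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    sock.settimeout(0.1)

    def loop():
        # best effort: ask for realtime scheduling (needs CAP_SYS_NICE)
        try:
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(10))
        except (PermissionError, OSError, AttributeError):
            pass  # fall back to normal scheduling
        while not stop_event.is_set():
            try:
                data, _addr = sock.recvfrom(bufsize)
            except socket.timeout:
                continue
            out_queue.put(data)  # the "message bus" hand-off

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

# usage: bind a receiver, start draining, then feed it jobs over UDP
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
jobs, stop = queue.Queue(), threading.Event()
receiver = start_udp_receiver(rx, jobs, stop)
```

note this only decouples the receive path from the consumers: it says nothing about the priority-inversion questions above, which is exactly the point.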

about a month ago

Linux Needs Resource Management For Complex Workloads

lkcl complex application example (161 comments)

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of themselves), and the responses - all 15 to 30 thousand - must be in-order before being post-processed.
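a toy sketch of that fan-out/collate shape (the handler functions here are made-up stand-ins for the real scripts):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_collate(jobs, handlers, pool):
    """run every handler against every job, then collate the results
    back into the original job order ready for post-processing."""
    # submission order is preserved, so collation is just a walk over
    # the futures in the order they were created
    futures = [[pool.submit(h, job) for h in handlers] for job in jobs]
    return [[f.result() for f in per_job] for per_job in futures]

with ThreadPoolExecutor(max_workers=8) as pool:
    # three stand-in "scripts" that each transform the job differently
    handlers = [lambda j, k=k: j * k for k in (1, 2, 3)]
    collated = fan_out_collate([10, 20], handlers, pool)
    # collated == [[10, 20, 30], [20, 40, 60]]
```

the hard part in practice is not this shape, it's doing it at 15,000-30,000 dispatches per second.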

so, the first implementation is in a single process, and we just about achieve the target of 1,000 jobs per second, but only with about 10 scripts per job.

anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads, and semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful: it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that under heavy load the futex system call was maxed out near 100%.

next came replacement of multiprocessing.Pipe with unix socket pairs, and of threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.

next came replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
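for reference, the essential edge-triggered epoll gotcha is that you get one wake-up per readiness transition, so the socket must be drained to EAGAIN every time. a minimal linux-only sketch (not the actual dispatcher):

```python
import select
import socket

def drain_edge_triggered(sock, ep, timeout=1.0):
    """read everything available after an EPOLLET wake-up; stopping
    early would mean never being notified about the remaining data."""
    chunks = []
    for _fd, event in ep.poll(timeout):
        if event & select.EPOLLIN:
            while True:
                try:
                    data = sock.recv(4096)
                except BlockingIOError:
                    break  # drained to EAGAIN: safe to wait again
                if not data:
                    break  # peer closed
                chunks.append(data)
    return b"".join(chunks)

# usage: a non-blocking socket registered edge-triggered
a, b = socket.socketpair()
b.setblocking(False)
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN | select.EPOLLET)
a.sendall(b"hello ")
a.sendall(b"world")
```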

the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:

1) even though each job is only 60 bytes, the pre-processing required to decide which process to send the job to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX shared memory Queues but the implementation sucks: it is not possible to identify the shared memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied and i have no idea how you would even mount the IPC subsystem in order to list what's been created, anyway.

i gave serious consideration to using the python LMDB bindings because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance: i have not investigated fully why that is: it _should_ work extremely well because of the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it and then appending job data to it, then using the mmap'd file for spin-locking to indicate when the job is being processed.
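the mmap'd-file idea can be sketched roughly like this. the record layout and status flags are made up for illustration, and a real version would need atomic operations and memory barriers on the flag byte, which plain python does not give you:

```python
import mmap
import os
import struct
import tempfile

RECORD = struct.Struct("<B60s")   # 1-byte status flag + 60-byte payload
FREE, READY, DONE = 0, 1, 2

def open_ring(path, nrecords):
    """memory-map a fixed-size file of job slots; the leading status
    byte of each slot is the hand-off flag a worker would spin on."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, RECORD.size * nrecords)
    mm = mmap.mmap(fd, RECORD.size * nrecords)
    os.close(fd)
    return mm

def post_job(mm, slot, payload):
    # publish: write the payload and flip the flag to READY
    mm[slot * RECORD.size:(slot + 1) * RECORD.size] = RECORD.pack(READY, payload)

def take_job(mm, slot):
    # consume: return the payload once, then mark the slot DONE
    status, payload = RECORD.unpack_from(mm, slot * RECORD.size)
    if status != READY:
        return None
    mm[slot * RECORD.size] = DONE
    return payload

# usage: one producer posts a 60-byte job into slot 0
ring_path = os.path.join(tempfile.mkdtemp(), "jobring")
mm = open_ring(ring_path, 16)
post_job(mm, 0, b"j" * 60)
```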

after all of these crazy implementations, i basically have absolutely no confidence in the linux kernel, nor in the GNU/Linux POSIX-compliant implementation of the OS on top of it - i have no confidence that it can handle the load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.

about a month ago

Pseudonyms Now Allowed On Google+

lkcl legal ramifications of identity verification (238 comments)

i think one of two things happened, here. first is that it might have finally sunk in to google that even just *claiming* to have properly verified user identities leaves them open to lawsuits should they fail to have properly carried out the verification checks that other users *believe* they have carried out. every other service people *know* that you don't trust the username: for a service to claim that they have truly verified the identity of the individual behind the username is reprehensibly irresponsible.

second is that they simply weren't getting enough people, so have "opened up the doors".

about a month ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:Hardware is hard (71 comments)

> Read "hard" as "Expensive as Hell"
>
> That is part of it yes. It requires a wide range of differently experienced people: low level software, high level software, circuit design, assembly, layout, component sourcing, factory liaison, DFT, Manufacturing etc.
>
> Then you need to get them all to work together. And you have to pay them.

... y'know... one of the reasons i came up with the idea to design mass-volume hardware that would be eco- and libre-friendly was because, after having developed the experience to deal with both low-level software and high-level software, and having done some circuit design at both school and university, i figured that the rest should not be too hard to learn... or manage.

you wanna know the absolute toughest part [apart from managing people]? it's the component sourcing. maan, is that tough. if you want a laugh [out of sheer horror, not because it was actually funny] look up the story on how long it took to find a decently-priced mid-mount micro HDMI type D [8 months].

so anyway, i set out to find people with the prerequisite skills that i *didn't* have, and offered them a chance to participate and profit. the list of people who have helped and then fallen by the wayside... i... well.... i want to succeed at this so that i can give them something in return for what they did.

about 1 month ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:Would it kill you to hint at what Improv is (wa (71 comments)

> If only there was some way to get more information, perhaps with a sort of "link" of some kind to a more detailed description.

here is the [old] specification of the [revision 1] CPU Card:

the current revision 2, which i am looking for factories to produce (RFQs have been sent out already), will be tried with 2gb of RAM. this is just a component change, not a layout change, so chances of success are high.

here is the [old] specification of the Micro-Engineering Board:

that was our "minimal test rig" which helped verify the interfaces on the first CPU Cards (and will help verify the next ones as well, with no further financial outlay needed. ever. ok, that would be true if i hadn't taken the opportunity to change the spec before we go properly live with it!! you only get one shot at designing a decade-long standard.... i'd rather get it right)

this will be the basis of the planned crowd-funding campaign: it's more of a micro-desktop PC:

the micro-desktop chassis is very basic: VGA, 2x USB, Ethernet, Power In (5.5 to 21V DC). all the other interfaces are on the CPU Card (USB-OTG, Micro-HDMI, Micro-SD). however unlike the Micro-Engineering Board, the power is done with a view to the average end-user (as is the VGA connector which means 2 independent screens, straight out the box).

does that help answer the question?

about 1 month ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:What was desirable about it? (71 comments)

> Open hardware sounds cool, but as others have noted, good hardware design is both difficult and expensive. Considering how rapidly the components advance (CPU/SoC, I/O, displays, etc.),

aaaah gotcha! that's the _whole_ reason why i designed the long-term modular standards, so that products *can* be split around the arms race of CPU/SoC on the one hand and battery life / display etc. on the other.

and the factory that we are in touch with (the big one), they _love_ this concept, because the one thing that you might not be aware of is that even the big guys cannot react fast enough nowadays.

imagine what it would mean to them to be able to buy HUGE numbers of CPUs (and related components), drop them into a little module that they KNOW is going to work across every single product that conforms to the long-term standard. in 6 months time there will be a faster SoC, more memory, less power, but that's ok, because *right now* they can get better discounts on the SoC that's available *now*.

on the other side of the interface, imagine what it would mean to them that they could buy the exact same components for a base unit for well... three to five years (or until something better came along or some component went end-of-life)?

it took them a while, but they _loved_ the idea. the problem is: as a PRC State-Sponsored company they are *prohibited* from doing anything other than following the rules... i can't tell you what those rules are: they're confidential, but it meant that we had to find other... creative ways to get the designs made.

> We're in a world where a first generation Nexus 7 tablet sells for $140 or less. At Walmart.

yeah. now that prices are dropping, just like the PC price wars, the profits are becoming so small that the manufacturers are getting alarmed (or just dropping out of the market entirely). those people are now looking for something else. they're willing to try something that might get them a profit. what should we tell them?

anyway: thank you for your post, darylb, it provides a very useful starting point for some of the key insights i want to get across to people.

about 1 month ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl moving forward: next crowdfunding launch (71 comments)

short version: the plan is to carry on, using the lessons learned to
try again, with a crowd-funding campaign that is transparent. please
keep an eye on the mailing list, i will also post here on slashdot
when it begins.

long version:

this has been a hugely ambitious venture, i think henrik's post explains much:

the - extremely ambitious - goal set by me is to solve a huge range of
issues, the heart of which is to create environmentally-conscious
mass-volume appliances that software libre developers are *directly*
involved in at every step of the way.

so, not to be disparaging to any project past or future, but this isn't
"another beagleboard", or "another raspberry pi beater": it's a way to
help the average person *own* their computer appliances and save
money over the long term. software libre developers are invited
to help make that happen.

by "own" we mean "proper copyright compliance, no locked boot
loaders and a thriving software libre environment that they can
walk straight into to help them do what they want with *their*
device... if they want to".

the actual OS installed on the appliance will be one that is
relevant for that appliance, be it ChromeOS, Android, even
Windows or MacOSX. regardless of the pre-installed OS, the
products i am or will be involved in *will* be ones that Software
Libre Developers would be proud to own and would recommend
even to the average person.

by "saving money over the long term" we mean "the device is
split into two around a stable long-term standard
with a thriving second-hand market on each side, with new
CPU Cards coming along as well as new products as well.
buy one CPU Card and one product, it'll be a little bit more
expensive than a monolithic non-upgradeable product,
but buy two and you save 30% because you only need
one CPU Card. break the base unit and instead of the whole
product becoming land-fill you just have to replace the base,
you can transfer not just the applications and data but
the *entire computer*".

it was the environmental modular aspects as well as
the commitment to free software *and* the desire to reach
mass-volume levels that attracted aaron to the Rhombus Tech project.

perhaps unsurprisingly - and i take responsibility for this - the
details of the above did not translate well into the Improv
launch. the reason i can say that is because even henrik,
who has been helping out and a member of the arm netbooks
mailing list for quite some time, *still* has not fully grasped
the full impact of the technical details behind the standards

(hi henrik, how are ya, thank you very very much for helping
with the boot of the first A10 / A20 CPU card, your post on
the mailing list last week was very helpful because it shows
that i still have a long way to go to get the message across
in a short concise way).

the level of logical deduction, the details that need to be taken
into account, the number of processors whose full specifications
must be known in order to make a decent long-term stable
standard.... many people i know reading that sentence will think i
am some sort of self-promoting egotistical dick but i can tell you
right now you *don't* want to be holding in your head the
kinds of mind-numbing details needed to design a long-term
mass-volume computing standard. it's fun... but only in a
masochistic sort of way!

anyway. i did say long, so i have an excuse, but to get to the
point: now that the money is being returned, we can start again
with a new campaign - using a crowdfunding site that shows
numbers, and starts with a lower target (250) that offers more value
for that same amount of money to everyone involved as various
stretch goals (500, 1,000, 2500) are achieved. these will include
casework, FCC Certification, OS images prepared and, most
importantly as far as i am concerned, one of the stretch goals
i feel should be a substantial donation to the KDE Team in
recognition of the help - through some tough lessons if we are
honest - that they have given, as well as the financial outlay
that they've put forward because they believed in what we're doing.

i'd like to hear people's thoughts and advice, here, because this
really is an exceptionally ambitious project that no commercial
company let alone a software-libre group would ever consider,
precisely because it requires a merging of *both* commercial
aspects *and* software libre principles and ethics. the
environmental angle and long-term financial savings are what
sells it to the end-users though.

about 1 month ago

Ask Slashdot: What Inspired You To Start Hacking?

lkcl a Commodore Pet 3032 (153 comments)

1978, aged 8, our school had a commodore pet 3032. i typed in a simple program in BASIC:

10 for i = 1 to 40
20 print tab(i), i
30 next i
40 goto 10

and watched the numbers 1 to 40 scroll across the screen. i figured "huh that was obvious, i can do that" and 25 years later i was reverse-engineering NT 4.0 Domains network traffic (often literally one bit at a time) by the same kind of logical inference of observing results and deducing knowledge.

by 2006 i learned that there is something called "Advaita Vedanta" which is crudely known in the west as "epistemology". Advaita Vedanta basically classifies knowledge (there are several types: inference is just one of them), and knowing *that* allows you to have confidence in your abilities. up until i heard about Advaita Vedanta i was "hacking blind and instinctively", basically. now i know that reverse-engineering is basically an extreme form of knowledge inference. which is kinda cool.

about 3 months ago

Ask Slashdot: What Inspired You To Start Hacking?

lkcl Re:Dark Reign (153 comments)

> Anybody here ever play that game?

yeah, me! were you around in 1995-1996 by any chance? in CB1 Cafe in cambridge UK i was the person who discovered that you could put zombies into the underground phase-tunnel vehicles, then sneak behind enemy lines (the underground vehicle could see "up" into one square at a time). i would go looking for artillery because artillery by default had a reaaally nasty habit of auto-firing at close-range enemies on a huuge delay. so, what would happen was: first zombie went up, artillery would turn and begin loading, zombie would go to nearest artillery craft and suicide, blowing up several. all artillery would fire, blowing up even more. second zombie up, artillery lock-and-load, zombie makes a beeline for.... you get the idea.

anyway the idea was good enough that it ended up on the hints-and-tips page. turns out that the people who we played were some of the people who worked at activision :)

about 3 months ago

Imparting Malware Resistance With a Randomizing Compiler

lkcl malware with randomisation (125 comments)

huh. this sounds very similar to the theoretical virus designs i came up with many years ago. yes, you heard right: turn it round. instead of the programs on the computer being randomised so that they are resistant to malware attacks, randomise the *malware* so that it is resistant to *anti-virus* detection. the model is basically the flu or common cold virus.

here's where it gets interesting: comparing the use of randomisation in malware vs randomisation in defense against malware, it's probably going to start being used in malware before it gets used in defending against malware. why? because malware attackers have nothing to lose. unfortunately, they are likely to keep their compilers secret. even *more* unfortunately, successful creation of anti-malware randomising compilers means that the malware attackers can use them as well.

but that is just a risk that has to be taken - and a reason to make sure a decent job is done of it.

about 3 months ago

Official MPG Figures Unrealistic, Says UK Auto Magazine

lkcl Re:Which is why sometimes small engines ... (238 comments)

> Whereas with a bigger engine this is less of the case and you can get equivalent mpg

ah, i wrote a diesel truck simulator in 1993 for Pi Technology: there is actually much more to it than that. with a bigger engine with higher torque it is possible to have the vehicle drive more often in its peak torque range where it has either better acceleration or better fuel economy or both.

with a smaller engine the effect you mention - that people put their foot to the floor - means that the engine has to rev its nuts off and thus operates waaay outside of its efficiency band.

about 3 months ago

Official MPG Figures Unrealistic, Says UK Auto Magazine

lkcl Re:watch the program on 5th gear (238 comments)

you need to watch that program. have you watched the program yet? what did the program get across to you, and can you put it better than i can?

about 3 months ago

Official MPG Figures Unrealistic, Says UK Auto Magazine

lkcl watch the program on 5th gear (238 comments)

before making *any* judgement you *need* to watch the program on 5th gear which covers exactly this question in some detail. basically the test was designed originally for people driving sensibly, and it was designed i think well over 20 possibly even 30 years ago. so it has a very *very* gentle acceleration and deceleration curve. gentle acceleration because that is not only fuel-efficient but also the cars of that time simply could not accelerate that much, and gentle braking because again that is more fuel-efficient but also because if you had drum brakes they would overheat.

people no longer drive sensibly: they are more aggressive with other drivers (not keeping a safe distance), they put their foot down hard on the accelerator and they put their foot down hard on the brake. also, as the cars are more reliable, they tend not to maintain them properly: until i watched another program on 5th gear about how badly old oil affects fuel economy and the lifetime of the engine, i had absolutely no intention of changing oil regularly in the decade-old cars i buy.

so, in effect, people should stop complaining and start driving in more fuel-efficient ways... *regardless* of how aggressive the person behind them gets when they set off from the lights at the same acceleration rate as a 40 tonne cargo lorry. that's the other person's problem.

about 3 months ago

It's Time For the Descent Games Return

lkcl love descent (251 comments)

i love descent, and i love that it's now software libre. i hope the guy who maintains d2x has stopped being an idiot by including patched versions of standard libraries such as libsdl without providing an option to replace them and forcing the patched versions to overwrite pre-installed software, but yes - awesome.

the thing about descent was that it was the first game with 6 degrees of freedom. i actually bought a special joystick that was capable of dealing with it (one designed for flight simulators) and after 2 to 3 weeks of practicing i was competent at side-motion circular slides firing at a target at the centre. the first 2 weeks were spent mostly getting motion sickness and having the nose of the craft bashed against a corner :)

it was also fun to watch spectators swaying from watching the screen! but, again, after a couple of weeks you got used to it, both as a player and as a spectator.

yeah - to those people who set up LAN parties: i hear ya :) i did the same. i think the lowest spec i got away with was a 486 SX 25 with 12mb of RAM, setting the screen to 320x240, and it was just about tolerable. i had to use 10base2 coax with terminators for goodness sake - what the heck i was doing with 5 networked computers in my house back in 1996 with just a 28kbaud modem i _really_ don't know!

so yes, absolutely: descent (the software libre version *or* a commercial version) gets my vote... *as long as* it has a community portal similar to that of Dark Reign, with a chat room so that people can meet other players, set up a match and play. that is bizarrely what's missing from bzflag: although bzflag has an in-game chat it doesn't have out-of-game community chat, very odd.

also, it would be awesome to see planetary-surface action as well, not just in mines (no matter how large). i always felt a little claustrophobic and the attack vectors would be very different in free space... interesting to think about the possibilities here, hmmm :)

about 2 months ago

Ask Slashdot: Practical Alternatives To Systemd?

lkcl Re:depinit (533 comments)


> "i have never even seen a PAM module which does this trick. it would be awesome to do the same trick for ssh as well."
>
> you mean like pam_ssh for ssh keys or if you just want it to work with gpg and ssh you could also run the gnome key manager as I do. True single sign on with all ssh and gpg keys.

no not pam_ssh. not "ask for a 2nd passphrase at a 2nd prompt which is entered into the ssh system to unlock the ssh key" - have ABSOLUTELY NO login credentials AT ALL, and LITERALLY use the success/fail of the ssh passphrase (or gpg passphrase) unlocking *AS* the login. no /etc/shadow, no password field in /etc/passwd - nothing BUT unlock the gpg or ssh key.

about 3 months ago

Former NSA Director: 'We Kill People Based On Metadata'

lkcl project "insight" from captain america 2 (155 comments)

so what's the difference between the NSA's plan and Hydra's plan in Captain America Winter Soldier? absolutely nothing as far as i can tell. can anyone tell me if i am mistaken?

about 3 months ago

Ask Slashdot: Practical Alternatives To Systemd?

lkcl depinit (533 comments)

depinit. written by richard lightman because he too did not trust the overcomplexity of sysv initscripts and wanted parallelism, it was adopted by linux from scratch and seriously considered for adoption in gentoo at the time. richard is extremely reclusive and his web site is now offline: you can get a copy of depinit however using

using depinit in 2006 i had a boot to X11 on a 1ghz pentium in 17 seconds, and a shutdown time of under three. depinit has two types of services: one is the "legacy" service (supporting old-style /etc/init.d/backgrounddaemon) and the other relied on stdin and stdout redirection. in depinit you can not only chain services together for their dependencies but also chain their *stdin and stdout* _and_ stderr together.

that has some very interesting implications. for example: rather than have some stupid system which monitors /var/log/apache2/logfile for security alerts or /var/log/auth.log for sshd attacks, what you do is run sshd or apache2 as a *foreground* service outputting log messages to stderr, chained to a "security analysis" service which then chains to a log file service.

the "security analysis" service could then *immediately* check the output looking for unauthorised logins and *immediately* ban repeat offenders by blocking their IP address, rather than having to either poll the files (with associated delays and/or CPU utilisation) or have some insanely complex monitoring of inodes which _still_ has associated delays.
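outside depinit you can mimic that chaining with plain pipes. a crude sketch - the "service" and "analyser" one-liners here are made-up stand-ins for sshd and the security-analysis service:

```python
import subprocess
import sys

# "service": logs to stderr, like sshd running in the foreground
service = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('Failed password for root', file=sys.stderr)"],
    stderr=subprocess.PIPE)

# "analyser": reads the service's stderr as its stdin, raising an alert
# the moment a suspicious line arrives - no log-file polling at all
analyser = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    if 'Failed' in line:\n"
     "        print('ALERT:', line.strip())"],
    stdin=service.stderr, stdout=subprocess.PIPE, text=True)

service.stderr.close()  # the analyser now owns the read end of the pipe
alerts, _ = analyser.communicate()
```

the point is that the alert fires as the line is written, with no inotify machinery and no polling delay.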

also depinit catches *all* signals - not just a few - and allows services to be activated based on those signals. richard also had a break-in on one system, where the intruders deployed the usual fork-and-continue trick, so he wrote some code which allowed the service-stopping code to up the aggressiveness on hunting down and killing child processes. this also turned out to be very useful in cases where services went a bit awry.

basically the list of innovations that richard added to depinit is very very long, in what is actually an extremely small amount of code. i simply haven't the space to list them all, and no, richard was not a fan of network-manager either.

btw you might also want to look at the replacement for /bin/login that richard wrote. it was f****g awesome. basically what he did was use gpg key passphrases as the login credentials.... and ran gpg-agent automatically as part of the *login*. i have never even seen a PAM module which does this trick. it would be awesome to do the same trick for ssh as well.

it's fascinating what someone can get up to when they have the programming skill and the logical reasoning abilities to analyse existing systems that everyone else takes for granted, work out that those systems are actually not up to scratch, and write their *own* replacements. it's just such a pity that nobody seems to have noticed what he achieved.

about 3 months ago

Ask Slashdot: Books for a Comp Sci Graduate Student?

lkcl learn how to learn (meta-learning) (247 comments)

there is actually something which is far more useful to be able to do, more than any amount of books read, which is only really possible effectively and efficiently now that internet searches are possible (and quick, and accurate), and that is meta-learning. in its crudest most disparaging form one might mistakenly call this cut-and-paste programming but it is actually nothing of the sort.

basically what you do is treat everything as a black box, and use the principles of the 6 different types of knowledge (listed on the wikipedia page for Advaita Vedanta, which is mentioned specifically because the western word Epistemology is woefully inadequate) to basically reverse-engineer the subject matter and in effect teach yourself *on the go* by way of analysing the results achieved, even though you are starting out from quite literally zero knowledge.

it does however take a hell of a lot of balls to do this *whilst being paid*, and most employers simply will not believe you when you tell them that this is something that you can do... and that you can be *more effective* at applying this technique than people who have been explicitly trained or "have experience" in the field.

to be fair to those people who genuinely do have experience, often such people *may* have encountered the circumstances before, such that they *may* have the answer much quicker than you-who-has-no-experience-at-all, *but*, the critical critical thing that you need to tell prospective employers is: what happens when something falls *outside* of the experience of the person who quotes has experience quotes? whom then would the employer rather have (if they had to choose one or the other rather than both people) - the person who will get there in the end, regardless of what they are asked to do, or would they rather have the person who can get there *most* of the time but who does not have the skills or intelligence to work out the all-important remaining last 10% of the job, without which the contract will remain unfulfilled and the company will go bust because of it?

in short: no amount of reading will substitute for learning how to learn and applying that skill *every single moment of your life*. when i hear people say "i am too old to learn" it makes me cringe, and i feel sad for them - i cannot say anything so i have to remain silent - but i feel sad for them because i know that inside they have given up. the only time to give up learning is when you are actually dead, and not before!!!

about 4 months ago



Power-loss-protected SSDs tested: only Intel S3500 passes

lkcl lkcl writes  |  about 8 months ago

lkcl (517947) writes "After the reports on SSD reliability and after experiencing a costly 50% failure rate on over 200 remote-deployed OCZ Vertex SSDs, a degree of paranoia set in where I work. I was asked to carry out SSD analysis with some very specific criteria: budget below £100, size greater than 16Gbytes and Power-loss protection mandatory. This was almost an impossible task: after months of searching the shortlist was very short indeed. There was only one drive that survived the torturing: the Intel S3500. After more than 6,500 power-cycles over several days of heavy sustained random writes, not a single byte of data was lost. Crucial M4: fail. Toshiba THNSNH060GCS: fail. Innodisk 3MP SATA Slim: fail. OCZ: epic fail. Only the end-of-lifed Intel 320 and its newer replacement the S3500 survived unscathed. The conclusion: if you care about data even when power could be unreliable, only buy Intel SSDs."
Link to Original Source
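the write-then-verify methodology described in the submission can be sketched as follows. this is a hedged illustration only, not the original test harness: the file path, record count and checksum scheme are assumptions, and actually cutting power mid-write requires external hardware (a switched PSU or mains relay) that code alone cannot provide. the idea is simply to write sequence-numbered, checksummed records with fsync, then count intact vs corrupt records after power is restored.

```python
# Hedged sketch of a power-loss data-integrity check: write numbered
# records carrying a checksum, then after an (externally triggered)
# power cut, verify how many records survived intact. The path, sizes
# and record layout here are illustrative assumptions.
import hashlib
import os
import struct

RECORD_DATA = 100                    # payload bytes per record, as in the article
RECORD_SIZE = 4 + RECORD_DATA + 16   # sequence number + payload + md5 digest

def write_records(path, count):
    """Append checksummed records, forcing each through the drive cache."""
    with open(path, "wb") as f:
        for seq in range(count):
            payload = os.urandom(RECORD_DATA)
            digest = hashlib.md5(struct.pack("<I", seq) + payload).digest()
            f.write(struct.pack("<I", seq) + payload + digest)
            f.flush()
            os.fsync(f.fileno())     # data must hit the SSD before power dies

def verify_records(path):
    """Return (intact, corrupt) record counts after power is restored."""
    intact = corrupt = 0
    with open(path, "rb") as f:
        while True:
            rec = f.read(RECORD_SIZE)
            if len(rec) < RECORD_SIZE:
                break                # a torn final record is expected; count whole ones
            seq_bytes = rec[:4]
            payload = rec[4:4 + RECORD_DATA]
            digest = rec[4 + RECORD_DATA:]
            if hashlib.md5(seq_bytes + payload).digest() == digest:
                intact += 1
            else:
                corrupt += 1
    return intact, corrupt
```

in the real test, any nonzero corrupt count (or missing fsync'd records) after a power cycle counts as a fail; only the Intel drives reportedly achieved zero across thousands of cycles.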

QiMod / Rhombus Tech A10 EOMA-68 CPU Card running Debian 7 (armhf)

lkcl lkcl writes  |  about a year ago

lkcl (517947) writes "With much appreciated community assistance, the first EOMA-68 CPU Card in the series, based on an Allwinner A10 processor, is now running Debian 7 (armhf variant). Two demo videos have been made. Included in the two demos: fvwm2, midori web browser, a patched version of VLC running full-screen 1080p, HDMI output, powering and booting from Micro-HDMI, and connecting to a 4-port USB Hub. Also shown is the 1st revision PCB for the upcoming KDE Flying Squirrel 7in tablet.

The next phase is to get the next iteration of test / engineering samples out to interested free software developers, as well as large clients, which puts the goal of having Free Software Engineers involved with the development of mass-volume products within reach."

Link to Original Source

Rhombus Tech 2nd revision A10 EOMA68 Card working samples

lkcl lkcl writes  |  about a year ago

lkcl (517947) writes "Rhombus Tech and QiMod have working samples of the first EOMA-68 CPU Card, featuring 1GByte of RAM, an A10 processor and stand-alone (USB-OTG-powered with HDMI output) operation. Upgrades will include the new Dual-Core ARM Cortex A7, the pin-compatible A20. This is the first CPU Card in the EOMA-68 range: there are others in the pipeline (A31, iMX6, jz4760 and a recent discovery of the Realtek RTD1186 is also being investigated).

The first product in the EOMA-68 family, also nearing a critical phase in its development, will be the KDE Flying Squirrel, a 7in user-upgradeable tablet featuring the KDE Plasma Active Operating System. Laptops, Desktops, Games Consoles, user-upgradeable LCD Monitors and other products are to follow. And every CPU that goes into the products will be pre-vetted for full GPL compliance, with software releases even before the product goes out the door. That's what we've promised to do: to provide Free Software Developers with the opportunity to be involved with mass-volume product development every step of the way. We're also on the look-out for an FSF-Endorseable processor which also meets mass-volume criteria which is proving... challenging."

Link to Original Source

Rhombus Tech 2nd revision A10 EOMA68 Card

lkcl lkcl writes  |  about a year ago

lkcl writes "The 2nd revision of the A10 EOMA-68 CPU Card is complete and samples are due soon: one sample is due back with a Dual-Core Allwinner A20. This will match up with the new revision of the Vivaldi Spark Tablet, codenamed the Flying Squirrel. Also in the pipeline is an iMX6 CPU Card, and the search is also on for a decent FSF-Endorseable option. The Ingenic jz4760 has been temporarily chosen. Once these products are out, progress becomes extremely rapid."
Link to Original Source

Rhombus Tech AM389x/DM816x EOMA-68 CPU Card started

lkcl lkcl writes  |  about 2 years ago

lkcl writes "The Rhombus Tech Project is pleased to announce the beginning of a Texas Instruments AM389x/DM816x EOMA-68 CPU Card: thanks to earlier work on the A10 CPU Card and thanks to Spectrum Digital, work on the schematics is progressing rapidly. With access to more powerful SoCs such as the OMAP5 and Exynos5 being definitely desirable but challenging at this early phase of the Rhombus Tech initiative, the AM3892 is powerful enough (SATA-II, up to 1600mhz DDR3 RAM, Gigabit Ethernet) to still be taken seriously even though it is a 1.2ghz ARM Cortex A8. With no AM3892 beagleboard clone available for sale, input is welcomed as to features people would like on the card. The key advantage of an AM3892 EOMA-68 CPU Card though: it's FSF Hardware-endorseable, opening up the possibility — at last — for the FSF to have an ARM-based tablet or smartbook to recommend. Preorders for the AM3892 CPU Card are open."
Link to Original Source

Rhombus Tech A10 EOMA-68 CPU Card schematics completed

lkcl lkcl writes  |  about 2 years ago

lkcl writes "Rhombus Tech's first CPU Card is nearing completion and availability: the schematics have been completed by Wits-Tech. Although it appears strange to be using a 1ghz Cortex A8 for the first CPU Card, not only is the mass-volume price of the A10 lower than other offerings; not only does the A10 classify as "good enough" (in combination with 1gb of RAM); but Allwinner Tech is one of the very rare China-based SoC companies willing to collaborate with Software (Libre) developers without an enforced (GPL-violating) NDA in place. Overall, it's the very first step in the right direction for collaboration between Software (Libre) developers and mass-volume PRC Factories. There will be more (faster, better) EOMA-68 CPU Cards: this one is just the first."
Link to Original Source

Google+ Identity Fraud

lkcl lkcl writes  |  about 2 years ago

lkcl writes " outlines the problem with Google+ as an "identity" service, but nowhere does this page discuss any compelling down-sides for Google themselves. One is the risk of lawsuits where people *relied* on Google+, were lulled into a false sense of security by Google+, failed to follow standard well-established online internet identity precautions, and were defrauded as a *direct* result of Google's claims of "safety". Another is the legal cost of involvement in, and the burden of proof that would fall onto Google in identity-fraud-related cases of online stalking, internet date rape and murder. Can anyone think of some other serious disadvantages that would compel google to rethink its google+ identity policy? I would really like to use Google Hangouts, but I'll be damned if i'll use it under anything other than under my 25-year-established pseudonym, "lkcl". What's been your experience with applying for an "unreal" identity?"
Link to Original Source

Pyjamas Domain hijacked

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "The domain name for the pyjamas project was hijacked today by some of its users. The reasons: objections over the project leader's long-term goal to have pyjamas development be self-hosting (git browsing, wiki, bugtracker etc. all as Free Software Licensed pyjamas applications). Normally if there is disagreement, a Free Software Project is forked: a new name is chosen and the parting-of-the-ways is done if not amicably then at least publicly. Pyjamas however now appears to have made Free Software history by being the first project to have its domain actually hijacked, rather embarrassingly, in the middle of a publicly-announced release cycle. Has anything like this ever happened before?"
Link to Original Source

B2G's Store and Security Model

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "Boot to Gecko is a full and complete stand-alone Operating System that is to use Gecko as both its Window Manager and Applications UI. With B2G primarily targeted at smartphones, security and the distribution of applications both face interesting challenges: scaling to mass-volume proportions (100 million+ units). The resources behind Google's app store (effectively unlimited cloud computing) are not necessarily guaranteed to be available to Telcos that wish to set up a B2G store. Although B2G began from Android, Mozilla's primary expertise in the development of Gecko and in the use of SSL is second to none. There is however a risk that the B2G Team will rely solely on userspace security enforcement (in a single executable) and attempt inappropriate use of CSP, Certificate pinning and other SSL techniques for app distribution, resulting in some quite harmful consequences that will impact B2G's viability. The question is, therefore: what security infrastructure surrounding the stores themselves as well as in the full B2G OS itself would actually be truly effective in the large-scale distribution of B2G applications, whilst also retaining flexibility and ease of development that would attract and retain app writers?"
Link to Original Source

EOMA-PCMCIA modular computer aiming for $15 and Fr

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "An initiative by a CIC company Rhombus Tech aims to provide Software (Libre) Developers with a PCMCIA-sized modular computer that could end up in mass-volume products. The Reference Design mass-volume pricing guide from the SoC manufacturer, for a device with similar capability to the raspberrypi, is around $15: 40% less than the $25 rbpi but for a device with an ARM Cortex A8 CPU 3x faster than the 700mhz ARM11 used in the rbpi. GPL Kernel source code is available. A page for community ideas for motherboard designs has also been created. The overall goal is to bring more mass-volume products to market which Software (Libre) Developers have actually been involved in, reversing the trend of endemic GPL violations surrounding ARM-based mass-produced hardware. The Preorder pledge registration is now open (account creation required)."
Link to Original Source

Where are the Ultra-efficient production Hybrid EV

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "Has anyone else wondered why ultra-efficient hybrid vehicles have to look like this, why the Twizy doesn't have doors as standard and has leased batteries, or why the Volkswagen XL1 does 313mpg but only seats 2 people and isn't yet in production? Why were both Toyota's RAV4-EV as well as GM's EV1 not just discontinued but destroyed? Against this background, what makes this 3-seat Hybrid EV design different, and what could make it successful? Although this article on outlines the problem, the solution isn't clear-cut, so how can ultra-efficient affordable hybrids actually end up on the road?"
Link to Original Source

An accidental Free Software Accelerated 3D GPU

lkcl lkcl writes  |  about 3 years ago

lkcl writes "In evaluating the Xilinx Zynq-7000 for use in a FSF Hardware-endorsed Laptop and possible OpenPandora v2.0, a series of Free Software projects were accidentally linked together — Gallium3D and LLVM 2.7's MicroBlaze FPGA Target. The combination raises the startling possibility that the Xilinx Zynq-7000 may turn out to be the perfect platform for a Free Software 3D GPU, for use in Tablets, Laptops, and the OpenGraphics Project, entirely by accident."
Link to Original Source

RISC Notebooks: does 28nm make all the difference?

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "Predictions have been made for quite some time that ARM or MIPS notebooks and servers would soon arrive. Failed prototypes date back over two years, with the Pegatron Netbook never finding a home; the $175 Next Surfer Pro being frantically withdrawn last week, the Lenovo Skylight being pulled weeks before it was to launch, and a rash of devices successfully making it to market with long-term unusable 1024x600 LCD panels and a maximum of 512mb RAM, with more being a rare (and often expensive) option. The Toshiba AC100 and the HP/Compaq Airlife 100 are classic examples.

So the key question is: what, exactly is holding things back? With the MIPS 1074k architecture, a Quad-Core 1.5ghz CPU at 40nm would only consume 1.3 watts, and 28nm could easily exceed 2.0ghz and use 30% less power. The MIPS GS464V, designed by China's ICT, has such high SIMD Vector performance that it will be capable of 100fps 1080p at 1ghz on a single core, and has hardware assisted accelerated emulation of over 200 x86 instructions. A Dual-Core Cortex A9 consumes 0.5 watts at 800mhz and 1.9 watts at 2ghz: 28nm would mean a whopping 3ghz could potentially be achieved. And Gaisler have a SPARC-compatible core, the LEON4, which can be configured in anything up to 8 cores, and run at up to 1.5ghz in 30nm, giving an impressive 1.7 DMIPS/MHz performance per core that matches that of both the MIPS 1074k and the ARM Cortex A9 designs.

Due to the incredibly small size, significantly-mass-volume SoC processors based around these cores could conceivably be around an estimated $12 for Quad-Core 28nm MIPS1074k and $15 for Dual-Core 28nm Cortex A9s, bringing the price of an impressive desktop system easily down to $80 retail and a decent laptop to $150.

So why, if this is what's possible, providing such fantastic performance at incredible prices, are we still seeing "demo" products like the OMAP4 TI Smartphone, are still waiting for the Samsung Exynos 4210, and for Nusmart's 2ghz 2816? Why are we not seeing any products with decent screens and memory from mainstream companies like Dell, IBM and HP, but are instead seeing a rash of low-performance low-quality GPL-violating Chinese-made Android-based knock-offs, touted as "web-ready", with webcams and microphones that don't even work?

What's it going to take for these alternative processors to hit mainstream? Do we really have to wait for 24nm or less, where it would be possible to run these RISC cores at ungodly 4ghz speeds or above, when 20,000 tiny RISC cores could fit on a single wafer resulting in prices of $4 to $5 per CPU? Or, with the rise of Android and GNU/Linux Operating Systems, would a lowly 28nm multi-core RISC-based System-on-a-Chip be enough for most peoples' needs?"

Link to Original Source
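the power and throughput figures quoted in the submission can be sanity-checked with simple arithmetic: aggregate Dhrystone throughput is just cores × clock (MHz) × DMIPS/MHz, and dividing by the quoted power draw gives DMIPS per watt. the sketch below uses only the article's own numbers (1.7 DMIPS/MHz, 1.3 watts for the quad-core 1074k, 1.9 watts for the dual-core A9 at 2ghz); it is a back-of-envelope check, not vendor data.

```python
# Back-of-envelope check of the figures quoted above. The only formula
# involved is: aggregate DMIPS = cores * clock_mhz * dmips_per_mhz.
DMIPS_PER_MHZ = 1.7   # quoted for the MIPS 1074k, Cortex A9 and LEON4 alike

def aggregate_dmips(cores, clock_mhz, dmips_per_mhz=DMIPS_PER_MHZ):
    """Total Dhrystone throughput for a symmetric multi-core part."""
    return cores * clock_mhz * dmips_per_mhz

# Quad-core MIPS 1074k at 1.5ghz / 40nm, quoted at 1.3 watts:
mips_dmips = aggregate_dmips(4, 1500)      # 10200 DMIPS
mips_per_watt = mips_dmips / 1.3           # roughly 7850 DMIPS/watt

# Dual-core Cortex A9 at 2ghz, quoted at 1.9 watts:
a9_dmips = aggregate_dmips(2, 2000)        # 6800 DMIPS
a9_per_watt = a9_dmips / 1.9               # roughly 3580 DMIPS/watt
```

on these numbers the quad-core MIPS part delivers roughly half again more raw DMIPS than the dual-core A9 at more than double the efficiency per watt, which is the crux of the "why isn't this shipping?" question above.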

ARM or MIPS Notebooks: does 28nm make a difference

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "Predictions have been made for quite some time that ARM or MIPS notebooks and servers would soon arrive. Failed prototypes date back over two years, with the Pegatron Netbook never finding a home; the $175 Next Surfer Pro being frantically withdrawn last week, the Lenovo Skylight being pulled weeks before it was to launch, and a rash of devices successfully making it to market with long-term unusable 1024x600 LCD panels and a maximum of 512mb RAM, with more being a rare (and often expensive) option.

So the key question is: what, exactly is holding things back? With the MIPS 1074k architecture, a Quad-Core 1.5ghz CPU at 40nm would only consume 1.3 watts, and 28nm could easily exceed 2.0ghz and use 30% less power. A Dual-Core Cortex A9 consumes 0.5 watts at 800mhz and 1.9 watts at 2ghz: 28nm would mean a whopping 3ghz could potentially be achieved. Due to the incredibly small size, significantly-mass-volume SoC processors based around these cores could conceivably be around $12 for Quad-Core 28nm MIPS1074k and $15 for Dual-Core 28nm Cortex A9s.

So why, if this is what's possible, providing such fantastic performance at incredible prices, are we still seeing "demo" products like the OMAP4 TI Smartphone, are still waiting for the Samsung Exynos 4210 and for Nusmart's 2ghz 2816?"

Link to Original Source

FreedomBox Foundation hits target in 5 days

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "The FreedomBox Foundation hit its minimum target of $60,000 in just 5 days, thanks to KickStarter Pledges, and seeks further contributions to ensure that the Project is long-term viable. Curiously but crucially, the FreedomBox fund is for Software only, yet neither suitable low-cost $30 ARM or MIPS "plug computers", envisaged by Eben Moglen as the ideal target platform, nor mid-to-high-end ARM or MIPS low-cost developer-suitable laptops actually exist. What do slashdot readers envisage to be the way forward, here, given that the goals of the FreedomBox are so at odds with mass-market Corporate-driven hardware design decisions?"
Link to Original Source

Toshiba AC100 Linux 2.6.29 Kernel Source available

lkcl lkcl writes  |  more than 3 years ago

lkcl (517947) writes "Toshiba Digital Media Group, Japan, kindly responded to a request for all GPL source code and supplied it on CD. The kernel source has been uploaded to the arm-netbook alioth git repository (branch ac100/2.6.29/lkcl). The AC100 has already been hacked, rooted and sadly ubuntu'd as noted on debian-arm. Availability of the "official" kernel source should make getting WIFI etc. somewhat easier. Two key questions remain, though: why does such a fantastic machine with a top-end dual core ARM Cortex A9 CPU only come with 512mb of RAM, and why supply only the truly dreadful and unusable 1024x600 resolution LCD when it is known to be the cause of so many negative reviews?"
Link to Original Source

Open University Linux Course Irony

lkcl lkcl writes  |  more than 4 years ago

lkcl (517947) writes "A new Open University course, Linux T155 aims to teach the benefits of Linux and Free Software, including the philosophy and history as well as the practical benefits of being virus-free and being able to prolong the working life of hardware. Unfortunately, in a delicious piece of irony, potential Tutors who stand by Free Software principles and thus are best suited to apply for a teaching post must violate the very principles they are expected to instil, by filling in a Microsoft Word formatted application form. An article on the Advogato Free Software Advocacy site describes the ways in which changing the "accidental" policy of using Proprietary File formats has succeeded and where it has failed."
Link to Original Source

python converted to javascript: executed in-browser

lkcl lkcl writes  |  more than 4 years ago

lkcl writes "Two independent projects, Skulpt and Pyjamas, are working to bring python to the web browser (and the javascript command-line) the hard way: as javascript. Skulpt already has a cool python prompt demo on its homepage; Pyjamas has a gwtcanvas demo port and a GChart 2.6 demo port. Using the 64-bit version of google v8 and PyV8, Pyjamas has just recently successfully run its python regression tests, converted to javascript, at the command-line. (Note: don't try any of the above SVG demos with FF2 or IE6: they will suck.)"
Link to Original Source

Python version of GWT spawns port of GChart

lkcl lkcl writes  |  more than 4 years ago

lkcl writes "GChart is a sophisticated graph and charting library, written in java for the popular Google Web Toolkit framework. Using a semi-automated java-to-python conversion tool, a reasonably usable but not entirely bug-free version of GChart's 19,000 lines of code has been ported, in under three days, to the Pyjamas Desktop/Web Widget Set. Whilst development is primarily taking place using Pyjamas-Desktop, an online demo of the javascript compiled version can be seen here (note: reduce performance expectations accordingly, if using IE6 or FF2)."
Link to Original Source
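to illustrate what "semi-automated" conversion can mean in practice, here is a deliberately minimal sketch: a handful of regex rewrites that handle the mechanical bulk of a java-to-python translation line by line, leaving the genuinely hard cases for hand-editing. none of these rules are taken from the actual tool used for the GChart port, which was far more involved; this is purely a hypothetical illustration of the approach.

```python
# Purely illustrative sketch of semi-automated Java -> Python line
# rewriting: simple regex rules for the mechanical cases, applied in
# order. The real conversion tool is not reproduced here.
import re

RULES = [
    (re.compile(r";\s*$"), ""),          # drop trailing semicolons
    (re.compile(r"\bthis\."), "self."),  # this.foo -> self.foo
    (re.compile(r"\bnull\b"), "None"),
    (re.compile(r"\btrue\b"), "True"),
    (re.compile(r"\bfalse\b"), "False"),
    (re.compile(r"//"), "#"),            # // comments -> # comments
]

def convert_line(java_line):
    """Apply each rewrite rule in turn to one line of Java source."""
    line = java_line
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line
```

even a toy ruleset like this converts a large fraction of straight-line code correctly, which is how 19,000 lines can be moved across in days rather than months; the remaining fraction (generics, anonymous inner classes, overloading) is where the "not entirely bug-free" caveat above comes from.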

Nerd Needs Help With Webkit Python Fiasco

lkcl lkcl writes  |  about 5 years ago

lkcl writes "I'm in need of slashdot advice and help, as I recognise that I'm taking the wrong approach. Background: I'm a free software developer and, to put it charitably, "I don't get out much" (i.e. I don't see why people have difficulty with what I do). As a result, I've been banned from about five major free software projects' mailing lists, and blamed for causing problems (but often I then see reports months later, of other people encountering the same thing with the same team!). I decided a year ago to put python on a par with javascript when it comes to DOM bindings, and have ported pyjamas to XULrunner, webkit, and recently IE's MSHTML. The trouble I'm having is with webkit (webkit doesn't have DOM python bindings, but IE and XULRunner do). At around 300 comments, we've got past the roaring bun-fight stage, and just got to the point where things were finally moving along, when one of the webkit maintainers decided to engineer an excuse to disable my account. Ordinarily, I'd leave this alone, but I feel that this complex project — of making python truly the equal of javascript when it comes to web application development — is too important to just let it go. So — seriously: I'm not messing about, here; I'm not looking for an excuse to whinge; I truly need some advice and help because i am absolutely not going to quit on this one. What do you feel needs to be done, to get Webkit its free software python bindings?"

