
Comments

top

How 4H Is Helping Big Ag Take Over Africa

lkcl Re:So, does water cost more? (377 comments)

What are the possible choices for farmers?

1. grow crappy crops with free seeds and lots of expensive water,
2. grow good crops with seeds that you need to pay for but use less water?

#2 will make you more money, so the cost of the seeds is a non-factor. #1 will make you poor, because when it doesn't rain your crops die.

So, what exactly is the issue?

this is a completely wrong analysis. if (2) were true those people would have been dead centuries or millennia ago. the fact that they are still alive tells you that they get by, and that, honestly, is good enough.

there was an attempt a few decades ago to do exactly what DuPont is doing [again]. i do not understand why 1st world countries do not leave the 3rd world alone to grow their own food. 1st world conditions are NOT THE SAME as 3rd world conditions.

the study that i heard about was exactly the same situation. a 3rd world country which had extremely poor yields was interfered with by a 1st world country providing donations of high-yield maize. for three to four years the success of the trials resulted in bumper crops, and the surrounding farmers switched over to the 1st world genetic variety of maize.

then there was a drought.

the high-yield 1st world maize died, and the entire area went into famine. and because nothing had grown that year, there was no seed, so nobody had any food the year after, either.

basically it turned out that the low-yield maize had a MASSIVE genetic diversity. some variants thrived in good conditions, some grew successfully *EVEN IN DROUGHT CONDITIONS*. no matter what happened, those people always got some food. not necessarily a lot, but enough so that they didn't die.

now the problem with this stupid, stupid interference by a 1st world country was that, because everyone in the area had converted over to this wonderful high-yield maize, NOBODY HAD ANY OF THE OLD GENETIC VARIETY LEFT.

it was a decade before the country properly recovered, and that was just from one drought.

so the conclusion is, inescapably, that DuPont is intent on killing people just to make a profit, as this isn't the first time that providing 1st world maize to 3rd world countries has gone very, very wrong.

just leave them alone. we *DON'T* know better.

about two weeks ago
top

Computer Scientists Say Meme Research Doesn't Threaten Free Speech

lkcl what's the threat? (109 comments)

this is pure speculation here, but my guess is that the people (politicians) protesting this research are quite likely to be the ones in charge of classified funding efforts for military, espionage and CIA equivalent research... and deployment of those same tools. if you've ever read Neal Stephenson's book "Cobweb" you'll know exactly what is most likely to be going on.

so, in essence, those people (politicians) know damn well that the espionage, domestic and political manipulation tools that they funded are quite likely to show up as anomalous activity should there ever be any tools (such as Truthy) provided to the general public, or any kind of research done to ascertain which "memes" *should* spread and which should not. for if there is anything that is detected which is *different* from normal expectations (a meme spread when it shouldn't have, and oh incidentally what was the source of that disruptive influence again?) it's really not going to go down too well with the people who *already* manipulate us from the shadows.

so i think you'll find that the people (politicians) protesting most loudly are the ones who are using media manipulation tools, and they're afraid that this research will be used to identify them, basically.

about two weeks ago
top

Interviews: Ask CMI Director Alex King About Rare Earth Mineral Supplies

lkcl landfill sites (62 comments)

yes, i definitely have a question. i heard the statistic that the concentration of heavy and rare earth metals is now *higher* in landfill sites than it is in the original mines that they came from, which, if true, is a global disgrace for which all of us are responsible. firstly, is this actually true, and secondly, is anyone doing anything about the extraction of rare earth metals from the electronics in which they were originally embedded?

about three weeks ago
top

Ask Slashdot: Can You Say Something Nice About Systemd?

lkcl no. (928 comments)

systemd violates the principles of unix, adding massive amounts of completely unnecessary complexity. there is absolutely nothing good to say about it.

about three weeks ago
top

Microsoft Works On Windows For ARM-Based Servers

lkcl Windows NT 3.5 (113 comments)

wasn't NT 3.5 available for MIPS, DEC Alpha, PowerPC *and* x86? wasn't the core of the NT kernel based on the Mach kernel, and written almost exclusively in c? so what the hell went wrong??

about three weeks ago
top

Dwarf Galaxies Dim Hopes of Dark Matter

lkcl statistically (137 comments)

i think at some point some scientists somewhere will work out that the statistical evidence is growing to show, more and more, that dark matter *doesn't* exist...

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:Over-emphasizing (98 comments)

PPS: Given your custom IPC for Python, could you go us one further and write an OSGi for Python using it? Pretty please! ;)

:) i'd love to but sadly it's one of the [few] contracts where i was in a proprietary environment. if i meet a software libre project some time in the future that needs that kind of stuff i'll certainly attempt to recreate it but it would need to be at least a year before i consider that.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:database performance (98 comments)

That's not loadavg, that's IO latency. You should probably be using iostat to get useful numbers.

oo, thank you very much for that tip, i'll try to pass it on and will definitely remember it for the next projects i work on. thank you.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:(not)perplexingly (98 comments)

It doesn't matter how awesome someone thinks their Python-LMDB project is. It doesn't matter how important someone thinks their Python-LMDB project is.

the mistake you've made has been raised a number of times in the slashdot comments (3 so far). the wikipedia page that was deleted was about LMDB, not python-lmdb. python-lmdb is just bindings to LMDB and that is not notable in any significant way.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Over-emphasizing (98 comments)

CPython is a compiler.

it's an interpreter which was [originally] based on a FORTH engine.

  It compiles Python source code to Python bytecode,

there is a compiler which does that, yes.

and the Python runtime executes the compiled bytecode.

it interprets it.

CPython has one major weakness, the GIL (global interpreter lock).

*sigh* it does. the effect that this has on threading is to reduce threads to the role of a mutually-exclusive task-switching mechanism.

I've seen the GIL harm high-throughput, multi-threaded event processing systems not dissimilar from the one you describe.

yes. you are one of the people who will appreciate that, given the codebase could not be written in (or converted to) any other language due to time-constraints, and given that threads couldn't be used (even though you'd think they'd be perfect for high-performance event processing, with no overhead in passing data between them), falling back on processes and custom-written IPC means the end-result is going to be... complicated.

If you must insist on Python and want to avoid multi-threaded I/O bound weaknesses of the GIL, then use Jython.

not a snowball in hell's chance of that happening :) not in a million years. not on this project, and not on any project i will actively and happily be involved in. and i *especially* cannot ever endorse the use of java for high-performance, reliable applications. i'm familiar with python's advantages and disadvantages, with the way its garbage collector works, and with the size of the actual python interpreter, and i'm happy that it is implemented in c.

java on the other hand i just... i don't even want to begin describing why i don't want to be involved in its deployment - i'm sure there are many here on slashdot happy to explain why java is unsuitable.

there are many other ways in which the limitation of threads in python imposed by the GIL may be avoided. i chose to work around the problem by using processes and custom-writing an IPC infrastructure using edge-triggered epoll. it was... hard. others may choose to use stackless python. others may agree with the idea to use jython, but honestly if the application was required to be reasonably reliable as well as high-performance there would be absolutely no way that i could ever endorse such an idea. sorry :)
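
to give a flavour of the processes-plus-epoll approach, here's a deliberately tiny sketch i've put together for illustration (it is *not* the actual production IPC layer, which i can't share, and the message format is made up): one forked worker talking to the parent over a unix socketpair, drained with edge-triggered epoll. linux-only, obviously.

    import os
    import select
    import socket

    parent_sock, child_sock = socket.socketpair()

    if os.fork() == 0:
        # ---- child: a trivial stand-in for a worker process
        parent_sock.close()
        for i in range(3):
            child_sock.sendall(b"result %d\n" % i)
        child_sock.close()
        os._exit(0)

    # ---- parent: edge-triggered event loop
    child_sock.close()
    parent_sock.setblocking(False)          # mandatory with EPOLLET
    ep = select.epoll()
    ep.register(parent_sock.fileno(), select.EPOLLIN | select.EPOLLET)

    done = False
    while not done:
        for fd, _events in ep.poll():
            # edge-triggered: drain until EAGAIN or we may never be woken again
            while True:
                try:
                    chunk = parent_sock.recv(4096)
                except BlockingIOError:
                    break
                if not chunk:               # EOF: the worker has exited
                    done = True
                    break
                print("got:", chunk)

    ep.close()
    parent_sock.close()
    os.wait()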

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Do not use joins (98 comments)

if something like PostgreSQL had been used as the back-end store, that rate would be somewhere around 30,000 tasks per second or possibly even less than that

You should pipe it to /dev/null. That's webscale.

don't jest... please :) jokes about "you should just have a big LED on the box with a switch and a battery" _not_ appreciated :)

but, seriously: the complete lack of need in this application for joins (as well as any other features of SQL or NOSQL databases) was what led me to research key-value stores in the first place.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:Would it hurt ... (98 comments)

A lot of the locking semantics you mentioned sound pretty similar to RCU which is used extensively in the Linux kernel, and allows for lockless reading on certain architectures.

http://en.wikipedia.org/wiki/R... .... yes, i think so. now imagine that all the copying is done by the OS at the OS's virtual-memory page-table granularity (so it carries no significant overhead). also imagine that the library is intelligent enough to move the older page into its record of free pages during a cleanup phase that doesn't cost very much either. and remember that when walking a B+ tree to find a record you only need to know the "top" (root) node... so you can update (or create) as many B+ tree nodes as you like using those COW semantics, knowing that it's *only* the root node that you need (after the fact) to tell (new) readers about. and now it's no longer expensive to do those RCU-style operations, and the performance is streets ahead of any other key-value store.

but i am not an expert on these things. i'm sure that if howard chipped in here (and he _is_ an expert on the linux kernel and on high-performance efficient algorithm implementation) he'd be able to tell you more and probably a lot more accurately than i can.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:Oh my... (98 comments)

The use cases for LMDB are pretty limited.

weeelll.... the article _did_ say "high performance", so there are some sacrifices that can be made especially when those features provided by SQL databases are clearly not even needed.

basically what was needed was to actually *re-implement* some of the missing features (indexes, for example), and that took quite some research. it turns out (after finding an article by someone who has implemented a SQL database on top of the very same key-value stores that everyone uses) that you can implement secondary indexes *using* a key-value store with range capabilities: concatenate the value that you wish to range-search on with the primary key of the record that you wish to access, then store that as the key, with a zero-length value, in the secondary-index key-value store.

this was what i had to implement - directly - in python, to provide secondary indexing on timestamps so that, for example, records could be deleted once they were no longer needed. it was actually incredibly efficient, *because of the performance of LMDB*.
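
purely as an illustration of that technique (this is a cut-down sketch, *not* the real code - the names, the 8-byte big-endian timestamp packing and the tiny map_size are all just for the example), with py-lmdb it looks roughly like this:

    import struct
    import tempfile
    import lmdb

    env = lmdb.open(tempfile.mkdtemp(), map_size=64 * 1024 * 1024, max_dbs=2)
    tasks = env.open_db(b"tasks")             # primary store: task_id -> payload
    by_time = env.open_db(b"tasks_by_time")   # secondary index: timestamp || task_id -> b""

    def put_task(task_id, payload, timestamp):
        with env.begin(write=True) as txn:
            txn.put(task_id, payload, db=tasks)
            # big-endian packing keeps lexicographic order == numeric order
            txn.put(struct.pack(">Q", timestamp) + task_id, b"", db=by_time)

    def expire_before(cutoff):
        # range-scan the index from the start, stopping at the cutoff timestamp
        limit = struct.pack(">Q", cutoff)
        with env.begin(write=True) as txn:
            expired = []
            for key, _ in txn.cursor(db=by_time):   # keys come back in sorted order
                if key >= limit:
                    break
                expired.append(key)
            for key in expired:
                txn.delete(key[8:], db=tasks)       # drop the task record itself
                txn.delete(key, db=by_time)         # drop the index entry

    put_task(b"task-1", b"payload-1", 100)
    put_task(b"task-2", b"payload-2", 200)
    expire_before(150)                              # task-1 is removed, task-2 survives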

so... yeah. didn't need SQL queries. added some basic secondary-indexing manually. got the transactional guarantees directly from the implementation of LMDB. got many other cool features....

please remember that i am keenly aware that SQLite, MySQL and i think even PostgreSQL can now be compiled to use LMDB as their back-end data store... but the application was _so demanding_ that even if that had been done it still would not have been enough.

but, apart from that: i don't believe you are correct in saying that there are a limited number of use cases for LMDB *itself* - the statement "there are a limited number of use cases for range-based key-value stores" *might* be a bit more accurate, but there are clearly quite a _lot_ of use cases for range-based key-value stores [including as the back-end of more complex data management systems such as SQL and NOSQL servers].

this high-performance task scheduler application happens to be one of them... and the main point of the article is that, amongst the available key-value stores currently in existence, my research tells me that i picked the absolute best of them all.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:Did you make any effort to get this undeleted? (98 comments)

I apologize for that, I was wrong and spoke too quickly. If you can find notable sources for P-LMDB, then it's worth a shot bringing it to that user's attention.

hey, not a problem. you're right about py-lmdb - my main concern is to get LMDB the recognition that its peer stores (such as BerkeleyDB) already have: http://en.wikipedia.org/wiki/B... - someone else mentioned that there are other such key-value stores (some from the same period as LMDB) which already have articles. and the fact that an *oracle* employee marked the page for deletion is the main issue of contention here.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl database performance (98 comments)

The author got poor performance from a SQL database with no indexing, which degraded as the number of records grew? You don't say! A database that has to do a full scan for reads performs poorly?

yes. the difference is that i had to do that analysis in a formal, repeatable, independent way, which i had never done before, and i was very surprised at the poor results. i was at least expecting a *consistent* and reliable rate of... well, i don't know: i was kinda expecting PostgreSQL to be top of the list and to reach 100,000 or 200,000 records per second... and it just... couldn't. i was *completely* caught off-guard by the need to switch off all the safety checks, and by how dramatic the effect of adding indexes on performance really was.

so it was the complete contrast - the py-lmdb benchmarks getting sequential-read speeds (2.5 million per second) an ORDER OF MAGNITUDE better than i was expecting - that made me really sit up and take notice.
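
for anyone who wants to get a feel for those numbers, a rough stand-alone sketch of a sequential-read measurement with py-lmdb looks something like the following (this is *not* the actual benchmark i ran, and the absolute figures will obviously vary wildly with hardware):

    import struct
    import tempfile
    import time
    import lmdb

    N = 1000000
    env = lmdb.open(tempfile.mkdtemp(), map_size=1024 * 1024 * 1024)

    with env.begin(write=True) as txn:
        for i in range(N):
            # monotonically increasing keys allow LMDB's fast "append" path
            txn.put(struct.pack(">Q", i), b"x" * 100, append=True)

    start = time.perf_counter()
    with env.begin() as txn:
        count = sum(1 for _ in txn.cursor())    # sequential walk of the B+ tree
    elapsed = time.perf_counter() - start
    print("%d records read sequentially, %.0f records/sec" % (count, count / elapsed))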

Surprise about load average seems equally naive. If you fork a bunch of processes that are doing IO, of COURSE the load increases. Load is a measure of the number of processes not sleeping. That's all it is. I don't understand his surprise that a system steadily doing a great deal of IO would show a lot of time spent in IO calls in profiling.

you've missed the point. it was that the exact same design, using 20 (or so) shm file handles instead of 200 file handles open on (effectively) the exact same data, resulted in a reasonable loadavg, whereas having the 200 file handles open produced a loadavg that ground the system completely to a halt.

so it's not the *actual* loadavg that is relevant, but that the *relative* loadavg before and after that one simple change shifted so dramatically: from "completely unusable and in no way deployable in a live production environment" to a "this might actually fly, jim" level.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Submitter doesn't understand Wikipedia notability (98 comments)

Never mind what projects use it; what have independent reliable sources written about LMDB?

i've written something and i'm pretty wubwubwubreliawibble oh look pretty coloured lights...

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Re:Did you make any effort to get this undeleted? (98 comments)

there isn't (and never has been) a python-lmdb wikipedia article. the deletion discussion involves the LMDB page (not the python bindings), despite LMDB having significant notable uses.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Oh my... (98 comments)

"a high-performance task scheduling engine written (perplexingly) in Python"

guys, there is this thing, it's called "algorithm"....

yeah.... except that algorithm took a staggering 3 months to develop. and it wasn't one algorithm, it was several, along with creating a networking IPC stack and having to make several unusual client-server design decisions. i can't go into the details because i was working in a secure environment, but basically, even though i was the one that wrote the code, i was taken aback that *python* - a scripted programming language - was capable of such extreme processing rates.

normally those kinds of rates would be associated with c, for example.

but the key point of the article - leaving that speed aside - is that if something like PostgreSQL had been used as the back-end store, that rate would be somewhere around 30,000 tasks per second, or possibly even less than that over the long term, because of the overwhelming overhead of SQL (and NoSQL) databases maintaining transaction logs and making other guarantees in ways that are clearly *significantly* less efficient than the way LMDB does it, those guarantees being integrated into LMDB at a fundamental design level.

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl I can't wait for it (98 comments)

At some point there will be an article on Wikipedia, that only meets Wikipedia's notability requirements due to media spillover complaining about the notability requirements.

yaaay! :) works for me. wasn't there a journalist who published a blog and used that as the only notable reference to create a fake article? :)

about a month ago
top

Python-LMDB In a High-Performance Environment

lkcl Would it hurt ... (98 comments)

OpenLDAP was originally using Berkeley DB, until recently. they'd worked with it for years, and got fed up with it. in order to minimise the amount of disruption to the code-base, LMDB was written as a near-drop-in replacement.

LMDB is - according to the web site and also the deleted wikipedia page - a key-value store. however its performance absolutely pisses over everything else around it, on pretty much every metric that can be measured, with very few exceptions.

basically howard's extensive experience, combined with the intelligence to do thorough research (going back even to computing papers from the 1960s), led him to make some absolutely critical but perfectly rational design choices, the ultimate combination of which is that LMDB outshines pretty much every key-value store ever written.

i mean, if you are running benchmark programs in *python* and getting sequential read access to records at a rate of 2,500,000 (2.5 MILLION) records per second... in a *scripted* programming language for goodness sake... then they have to be doing something right.

the random write speed of the python-based benchmarks showed 250,000 records written per second. the _sequential_ ones managed just over 900,000 per second!

there are several key differences between Berkeley DB's API and LMDB's API. the first is that LMDB can be put into "append" mode. basically what you do is you *guarantee* that the key of new records is lexicographically greater than all other records. with this guarantee LMDB basically lets you put the new record _right_ at the end of its B+ Tree. this results in something like an astonishing 5x performance increase in writes.
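
in py-lmdb that mode is just a flag on put() - a minimal sketch (my own illustration, not code from the LMDB or py-lmdb projects):

    import struct
    import tempfile
    import lmdb

    env = lmdb.open(tempfile.mkdtemp(), map_size=64 * 1024 * 1024)

    with env.begin(write=True) as txn:
        for i in range(100000):
            # the caller guarantees each key sorts after every existing key,
            # so LMDB can drop the record straight onto the rightmost leaf page
            txn.put(struct.pack(">Q", i), b"task payload", append=True)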

the second key difference is that LMDB allows you to add duplicate values per key. in fact i think there's also a special mode (never used it) where if you do guaranteed fixed (identical) record sizes LMDB will let you store the values in a more space-efficient manner.
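
again purely to illustrate the API (and to note: the fixed-size mode i half-remember is, i believe, LMDB's MDB_DUPFIXED optimisation), duplicates-per-key looks like this in py-lmdb - a sub-database opened with dupsort=True keeps several sorted values under one key:

    import tempfile
    import lmdb

    env = lmdb.open(tempfile.mkdtemp(), map_size=64 * 1024 * 1024, max_dbs=1)
    queue = env.open_db(b"queue", dupsort=True)

    with env.begin(write=True, db=queue) as txn:
        txn.put(b"priority-1", b"task-a")
        txn.put(b"priority-1", b"task-b")       # a second value under the same key

    with env.begin(db=queue) as txn:
        cur = txn.cursor()
        if cur.set_key(b"priority-1"):
            print(list(cur.iternext_dup()))     # [b'task-a', b'task-b']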

so it's pretty sophisticated.

from a technical perspective, there are two key differences between LMDB and *all* other key-value stores.

the first is: it uses "append-only" when adding new records. basically this guarantees that there can never be any corruption of existing data just because a new record is added.

the second is: it uses shared memory "copy-on-write" semantics. what that means is that the (one allowed) writer NEVER - and i mean never - blocks readers, whilst importantly being able to guarantee data integrity and transaction atomicity as well.

the way this is achieved is that, because copy-on-write is enabled, the "writer" may make as many writes as it wants, knowing full well that the readers will NOT be interfered with (because any write creates a COPY of the memory page being written to). then, finally, once everything is done and the new top-level parent of the B+ Tree is finished, the VERY last thing is a single simple LOCK, update-pointer-to-top-level, UNLOCK.

so as long as readers do the exact same LOCK, get-pointer-to-top-level-of-B+-Tree, UNLOCK, there is NO FURTHER NEED for any kind of locking AT ALL.

i am just simply amazed at the simplicity, and how this technique has just... never been deployed in any database engine until now. the reason, as howard makes clear, is that the original research back in the 1960s was restricted to 32-bit memory spaces. now we have 64-bit, so shared memory may map absolutely enormous files, and there is no problem deploying this technique now.
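
you can actually watch that behaviour from python. here's a tiny sketch (again, just an illustration i knocked up, not production code): a reader that opened its transaction before a write commits keeps seeing the old snapshot and is never blocked, while a fresh reader picks up the new root:

    import tempfile
    import threading
    import lmdb

    env = lmdb.open(tempfile.mkdtemp(), map_size=64 * 1024 * 1024)

    with env.begin(write=True) as txn:          # seed one record
        txn.put(b"counter", b"1")

    reader_ready = threading.Event()
    writer_done = threading.Event()
    seen = {}

    def reader():
        with env.begin() as txn:                # snapshot taken *before* the write commits
            reader_ready.set()
            writer_done.wait()                  # the writer commits while we hold our snapshot
            seen["old snapshot"] = txn.get(b"counter")

    t = threading.Thread(target=reader)
    t.start()
    reader_ready.wait()

    with env.begin(write=True) as txn:          # the writer never blocks the reader
        txn.put(b"counter", b"2")
    writer_done.set()
    t.join()

    with env.begin() as txn:                    # a fresh reader picks up the new root
        seen["new snapshot"] = txn.get(b"counter")

    print(seen)                                 # {'old snapshot': b'1', 'new snapshot': b'2'}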

all incredibly cool.

about a month ago

Submissions

top

Open Educational Robot for under $50

lkcl lkcl writes  |  about three weeks ago

lkcl (517947) writes "Straight from the crowd-funding page comes news of Hack-E-Bot, described as a "low price and open source robot that hopes to encourage children to learn about engineering, electronics, and programming". Part of the reason for achieving such a low price appears to be down to the use of a tiny $7 off-the-shelf Arduino-compatible board called Trinket from Adafruit. The Trinket (ATTiny328 PIC) press-fits neatly into a supplied breadboard: all connections and any educational experiments can be done entirely without soldering. It's cute, it's under $50, you can pay extra for one to be given free to a child if you want, and there's a lower-cost kit version available if you prefer to use your own embedded board and are prepared to write your own software. I absolutely love the whole idea, and they've already reached the incredibly low $7,000 funding target, so it's going ahead."
top

Python-LMDB in a high-performance environment

lkcl lkcl writes  |  about a month ago

lkcl (517947) writes "In an open letter to the core developers behind OpenLDAP (Howard Chu) and Python-LMDB (David Wilson) is a story of a successful creation of a high-performance task scheduling engine written (perplexingly) in python. With only partial optimisation allowing tasks to be executed in parallel at a phenomenal rate of 240,000 per second, the choice to use Python-LMDB for the per-task database store based on its benchmarks as well as its well-researched design criteria turned out to be the right decision. Part of the success was also due to earlier architectural advice gratefully received here on slashdot. What is puzzling though is that LMDB on wikipedia is being constantly deleted, despite its "notability" by way of being used in a seriously-long list of prominent software libre projects, which has been, in part, motivated by the Oracle-driven BerkeleyDB license change. It would appear that the original complaint about notability came from an Oracle employee as well..."
top

Power-loss-protected SSDs tested: only Intel S3500 passes

lkcl lkcl writes  |  about a year ago

lkcl (517947) writes "After the reports on SSD reliability and after experiencing a costly 50% failure rate on over 200 remote-deployed OCZ Vertex SSDs, a degree of paranoia set in where I work. I was asked to carry out SSD analysis with some very specific criteria: budget below £100, size greater than 16Gbytes and Power-loss protection mandatory. This was almost an impossible task: after months of searching the shortlist was very short indeed. There was only one drive that survived the torturing: the Intel S3500. After more than 6,500 power-cycles over several days of heavy sustained random writes, not a single byte of data was lost. Crucial M4: fail. Toshiba THNSNH060GCS: fail. Innodisk 3MP SATA Slim: fail. OCZ: epic fail. Only the end-of-lifed Intel 320 and its newer replacement the S3500 survived unscathed. The conclusion: if you care about data even when power could be unreliable, only buy Intel SSDs."
Link to Original Source
top

QiMod / Rhombus Tech A10 EOMA-68 CPU Card running Debian 7 (armhf)

lkcl lkcl writes  |  about a year and a half ago

lkcl (517947) writes "With much appreciated community assistance, the first EOMA-68 CPU Card in the series, based on an Allwinner A10 processor, is now running Debian 7 (armhf variant). Two demo videos have been made. Included in the two demos: fvwm2, midori web browser, a patched version of VLC running full-screen 1080p, HDMI output, powering and booting from Micro-HDMI, and connecting to a 4-port USB Hub. Also shown is the 1st revision PCB for the upcoming KDE Flying Squirrel 7in tablet.

The next phase is to get the next iteration of test / engineering samples out to interested free software developers, as well as large clients, which puts the goal of having Free Software Engineers involved with the development of mass-volume products within reach."

Link to Original Source
top

Rhombus Tech 2nd revision A10 EOMA68 Card working samples

lkcl lkcl writes  |  about a year and a half ago

lkcl (517947) writes "Rhombus Tech and QiMod have working samples of the first EOMA-68 CPU Card, featuring 1GByte of RAM, an A10 processor and stand-alone (USB-OTG-powered with HDMI output) operation. Upgrades will include the new Dual-Core ARM Cortex A7, the pin-compatible A20. This is the first CPU Card in the EOMA-68 range: there are others in the pipeline (A31, iMX6, jz4760 and a recent discovery of the Realtek RTD1186 is also being investigated).

The first product in the EOMA-68 family, also nearing a critical phase in its development, will be the KDE Flying Squirrel, a 7in user-upgradeable tablet featuring the KDE Plasma Active Operating System. Laptops, Desktops, Games Consoles, user-upgradeable LCD Monitors and other products are to follow. And every CPU that goes into the products will be pre-vetted for full GPL compliance, with software releases even before the product goes out the door. That's what we've promised to do: to provide Free Software Developers with the opportunity to be involved with mass-volume product development every step of the way. We're also on the look-out for an FSF-Endorseable processor which also meets mass-volume criteria which is proving... challenging."

Link to Original Source
top

Rhombus Tech 2nd revision A10 EOMA68 Card

lkcl lkcl writes  |  about a year and a half ago

lkcl writes "The 2nd revision of the A10 EOMA-68 CPU Card is complete and samples are due soon: one sample is due back with a Dual-Core Allwinner A20. This will match up with the new revision of the Vivaldi Spark Tablet, codenamed the Flying Squirrel. Also in the pipeline is an iMX6 CPU Card, and the search is also on for a decent FSF-Endorseable option. The Ingenic jz4760 has been temporarily chosen. Once these products are out, progress becomes extremely rapid."
Link to Original Source
top

Rhombus Tech AM389x/DM816x EOMA-68 CPU Card started

lkcl lkcl writes  |  about 2 years ago

lkcl writes "The Rhombus Tech Project is pleased to announce the beginning of a Texas Instruments AM389x/DM816x EOMA-68 CPU Card: thanks to earlier work on the A10 CPU Card and thanks to Spectrum Digital, work on the schematics is progressing rapidly. With access to more powerful SoCs such as the OMAP5 and Exynos5 being definitely desirable but challenging at this early phase of the Rhombus Tech initiative, the AM3892 is powerful enough (SATA-II, up to 1600mhz DDR3 RAM, Gigabit Ethernet) to still take seriously even though it is a 1.2ghz ARM Cortex A8. With no AM3892 beagleboard clone available for sale, input is welcomed as to features people would like on the card. The key advantage of an AM3892 EOMA-68 CPU Card though: it's FSF Hardware-endorseable, opening up the possibility — at last — for the FSF to have an ARM-based tablet or smartbook to recommend. Preorders for the AM3892 CPU Card are open."
Link to Original Source
top

Rhombus Tech A10 EOMA-68 CPU Card schematics completed

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "Rhombus Tech's first CPU Card is nearing completion and availability: the schematics have been completed by Wits-Tech. Although it appears strange to be using a 1ghz Cortex A8 for the first CPU Card, not only is the mass-volume price of the A10 lower than other offerings; not only does the A10 classify as "good enough" (in combination with 1gb of RAM); but Allwinner Tech is one of the very rare China-based SoC companies willing to collaborate with Software (Libre) developers without an enforced (GPL-violating) NDA in place. Overall, it's the very first step in the right direction for collaboration between Software (Libre) developers and mass-volume PRC Factories. There will be more (faster, better) EOMA-68 CPU Cards: this one is just the first."
Link to Original Source
top

Google+ Identity Fraud

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "http://en.wikipedia.org/wiki/Nymwars outlines the problem with Google+ as an "identity" service, but nowhere does this page discuss any compelling down-sides for Google themselves. One is the risk of lawsuits where people *relied * on Google+, were lulled into a false sense of security by Google+, failed to follow standard well-established online internet identity precautions, and were defrauded as a *direct* result of Google's claims of "safety". Another is the legal cost of involvement in, and the burden of proof that would fall onto Google in identity-fraud-related cases of online stalking, internet date rape and murder. Can anyone think of some other serious disadvantages that would compel google to rethink its google+ identity policy? I would really like to use Google Hangouts, but I'll be damned if i'll use it under anything other than under my 25-year-established pseudonym, "lkcl". What's been your experience with applying for an "unreal" identity?"
Link to Original Source
top

Pyjamas pyjs.org Domain hijacked

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "The domain name for the pyjamas project, pyjs.org, was hijacked today by some of its users. The reasons: objections over the project leader's long-term goal to have pyjamas development be self-hosting (git browsing, wiki, bugtracker etc. all as Free Software Licensed pyjamas applications). Normally if there is disagreement, a Free Software Project is forked: a new name is chosen and the parting-of-the-ways is done if not amicably but at least publicly. Pyjamas however now appears to have made Free Software history by being the first project to have its domain actually hijacked. rather embarrassingly, in the middle of a publicly-announced release cycle. Has anything like this ever happened before?"
Link to Original Source
top

B2G's Store and Security Model

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "Boot to Gecko is a full and complete stand-alone Operating System that is to use Gecko as both its Window Manager and Applications UI. Primarily targetted at smartphones, security and the distribution of applications are both facing interesting challenges: scaling to mass-volume proportions (100 million+ units). The resources behind Google's app store (effectively unlimited cloud computing) are not necessarily guaranteed to be available to Telcos that wish to set up a B2G store. Although B2G began from Android, Mozilla's primary expertise in the development of Gecko and in the use of SSL is second to none. There is howevera risk that the B2G Team will rely solely on userspace security enforcement (in a single executable) and to try inappropriate use of CSP, Certificate pinning and other SSL techniques for app distribution, resulting in some quite harmful consequences that will impact B2G's viability. The question is, therefore: what security infrastructure surrounding the stores themselves as well as in the full B2G OS itself would actually be truly effective in the large-scale distribution of B2G applications, whilst also retaining flexibility and ease of development that would attract and retain app writers?"
Link to Original Source
top

EOMA-PCMCIA modular computer aiming for $15 and Fr

lkcl lkcl writes  |  more than 2 years ago

lkcl writes "An initiative by a CIC company Rhombus Tech aims to provide Software (Libre) Developers with a PCMCIA-sized modular computer that could end up in mass-volume products. The Reference Design mass-volume pricing guide from the SoC manufacturer, for a device with similar capability to the raspberrypi, is around $15: 40% less than the $25 rbpi but for a device with an ARM Cortex A8 CPU 3x times faster than the 700mhz ARM11 used in the rbpi. GPL Kernel source code is available. A page for community ideas for motherboard designs has also been created. The overall goal is to bring more mass-volume products to market which Software (Libre) Developers have actually been involved in, reversing the trend of endemic GPL violations surrounding ARM-based mass-produced hardware. The Preorder pledge registration is now open (account creation required)."
Link to Original Source
top

Where are the Ultra-efficient production Hybrid EV

lkcl lkcl writes  |  about 3 years ago

lkcl writes "Has anyone else wondered why ultra-efficient hybrid vehicles have to look like this, why the Twizy doesn't have doors as standard and has leased batteries, or why the Volkswagen XL1 does 313mpg but only seats 2 people and isn't yet in production? Why were both Toyota's RAV4-EV as well as GM's EV1 not just discontinued but destroyed? Against this background, what makes this 3-seat Hybrid EV design different, and what could make it successful? Although this article on hybridcar.com outlines the problem, the solution isn't clear-cut, so how can ultra-efficient affordable hybrids actually end up on the road?"
Link to Original Source
top

An accidental Free Software Accelerated 3D GPU

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "In evaluating the Xilinx Xilinx Zynq-7000 for use in a FSF Hardware-endorsed Laptop and possible OpenPandora v2.0, a series of Free Software projects were accidentally linked together — Gallium3D and LLVM 2.7's MicroBlaze FPGA Target. The combination is the startling possibility that the Xilinx Zynq-7000 may turn out to be the perfect platform for a Free Software 3D GPU, for use in Tablets, Laptops, and the OpenGraphics Project. entirely by accident."
Link to Original Source
top

RISC Notebooks: does 28nm make all the difference?

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "Predictions have been made for quite some time that ARM or MIPS notebooks and servers will be here. Failed prototypes date back over two years, with the Pegatron Netbook never finding a home; the $175 Next Surfer Pro being frantically withdrawn last week, the Lenovo Skylight being pulled weeks before it was to launch, and a rash of devices successfully making it to market with long-term unusable 1024x600 LCD panels and a maximum of 512mb RAM being the only real rare (and often expensive) option. The Toshiba AC100 and the HP/Compaq Airlife 100 are classic examples.

So the key question is: what, exactly is holding things back? With the MIPS 1074k architecture, a Quad-Core 1.5ghz CPU at 40nm would only consume 1.3 watts, and 28nm could easily exceed 2.0ghz and use 30% less power. The MIPS GS464V, designed by China's ICT, has such high SIMD Vector performance that it will be capable of 100fps 1080p at 1ghz on a single core, and has hardware assisted accelerated emulation of over 200 x86 instructions. A Dual-Core Cortex A9 consumes 0.5 watts at 800mhz and 1.9 watts at 2ghz: 28nm would mean a whopping 3ghz could potentially be achieved. And Gaisler have a SPARC-compatible core, the LEON4, which can be configured in anything up to 8 cores, and run at up to 1.5ghz in 30nm, giving an impressive 1.7DMIPS/Mhz performance per core that matches that of both the MIPS 1074k and the ARM Cortex A9 designs.

Due to the incredibly small size, significantly-mass-volume SoC processors based around these cores could conceivably be around an estimated $12 for Quad-Core 28nm MIPS1074k and $15 for Dual-Core 28nm Cortex A9s, bringing the price of an impressive desktop system easily down to $80 retail and a decent laptop to $150.

So why, if this is what's possible, providing such fantastic performance at incredible prices, are we still seeing "demo" products like the OMAP4 TI Smartphone, are still waiting for the Samsung Exynos 4210, and for Nusmart's 2ghz 2816? Why are we not seeing any products with decent screens and memory from mainstream companies like Dell, IBM and HP, but are instead seeing a rash of low-performance low-quality GPL-violating Chinese-made Android-based knock-offs, touted as "web-ready", with webcams and microphones that don't even work?

What's it going to take for these alternative processors to hit mainstream? Do we really have to wait for 24nm or less, where it would be possible to run these RISC cores at ungodly 4ghz speeds or above, when 20,000 tiny RISC cores could fit on a single wafer resulting in prices of $4 to $5 per CPU? Or, with the rise of Android and GNU/Linux Operating Systems, would a lowly 28nm multi-core RISC-based System-on-a-Chip be enough for most peoples' needs?"

Link to Original Source
top

ARM or MIPS Notebooks: does 28nm make a difference

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "Predictions have been made for quite some time that ARM or MIPS notebooks and servers will be here. Failed prototypes date back over two years, with the Pegatron Netbook never finding a home; the $175 Next Surfer Pro being frantically withdrawn last week, the Lenovo Skylight being pulled weeks before it was to launch, and a rash of devices successfully making it to market with long-term unusable 1024x600 LCD panels and a maximum of 512mb RAM being the only real rare (and often expensive) option.

So the key question is: what, exactly is holding things back? With the MIPS 1074k architecture, a Quad-Core 1.5ghz CPU at 40nm would only consume 1.3 watts, and 28nm could easily exceed 2.0ghz and use 30% less power. A Dual-Core Cortex A9 consumes 0.5 watts at 800mhz and 1.9 watts at 2ghz: 28nm would mean a whopping 3ghz could potentially be achieved. Due to the incredibly small size, significantly-mass-volume SoC processors based around these cores could conceivably be around $12 for Quad-Core 28nm MIPS1074k and $15 for Dual-Core 28nm Cortex A9s.

So why, if this is what's possible, providing such fantastic performance at incredible prices, are we still seeing "demo" products like the OMAP4 TI Smartphone, are still waiting for the Samsung Exynos 4210 and for Nusmart's 2ghz 2816?"

Link to Original Source
top

FreedomBox Foundation hits target in 5 days

lkcl lkcl writes  |  more than 3 years ago

lkcl writes "The FreedomBox Foundation hit its minimum target of $60,000 in just 5 days, thanks to KickStarter Pledges, and seeks further contributions to ensure that the Project is long-term viable. Curiously but crucially, the FreedomBox fund is for Software only, yet neither suitable low-cost $30 ARM or MIPS "plug computers", envisaged by Eben Moglen as the ideal target platform, nor mid-to-high-end ARM or MIPS low-cost developer-suitable laptops actually exist. What do slashdot readers envisage to be the way forward, here, given that the goals of the FreedomBox are so at odds with mass-market Corporate-driven hardware design decisions?"
Link to Original Source
top

Toshiba AC100 Linux 2.6.29 Kernel Source available

lkcl lkcl writes  |  more than 4 years ago

lkcl (517947) writes "Toshiba Digital Media Group, Japan, kindly responded to a request for all GPL source code and supplied it on CD. The kernel source has been uploaded to the arm-netbook alioth git repository (branch ac100/2.6.29/lkcl). The AC100 has already been hacked, rooted and sadly ubuntu'd as noted on debian-arm. Availability of the "official" kernel source should make getting WIFI etc. somewhat easier. Two key questions remain, though: why does such a fantastic machine with a top-end dual core ARM Cortex A9 CPU only come with 512mb of RAM, and why supply only the truly dreadful and unusable 1024x600 resolution LCD when it is known to be the cause of so many negative reviews?"
Link to Original Source
top

Open University Linux Course Irony

lkcl lkcl writes  |  more than 4 years ago

lkcl (517947) writes "A new Open University course, Linux T155 aims to teach the benefits of Linux and Free Software, including the philosophy and history as well as the practical benefits of being virus-free and being able to prolong the working life of hardware. Unfortunately, in a delicious piece of irony, potential Tutors who stand by Free Software principles and thus are best suited to apply for a teaching post must violate the very principles they are expected to instil, by filling in a Microsoft Word formatted application form. An article on the Advogato Free Software Advocacy site describes the ways in which changing the "accidental" policy of using Proprietary File formats has succeeded and where it has failed."
Link to Original Source
top

python converted to javascript: executed in-browse

lkcl lkcl writes  |  more than 5 years ago

lkcl writes "Two independent projects Skulpt and Pyjamas are working to bring python to the web browser (and the javascript command-line) the hard way: as javascript. Skulpt already has a cool python prompt demo on its homepage; Pyjamas has a gwtcanvas demo port and a GChart 2.6 demo port. Using the 64-bit version of google v8 and PyV8, Pyjamas has just recently successfully run its python regression tests, converted to javascript, at the command-line. (Note: don't try any of the above SVG demos with FF2 or IE6: they will suck.)"
Link to Original Source

Journals

lkcl has no journal entries.
