
Comments


Scientists Seen As Competent But Not Trusted By Americans

lkcl trust vs respect (451 comments)

'Scientists have earned the respect of Americans but not necessarily their trust,' said lead author Susan Fiske, the Eugene Higgins Professor of Psychology and professor of public affairs.

it was only fairly recently that someone explained the absolutely crucial difference between trust and respect, and it knocked me sideways. i used to always accept the "wisdom" that trust is EARNED.

trust - literally by definition - CANNOT be EARNED.

*respect* can be earned, because to respect someone (or something) you learn from PAST experience and PAST actions, you make a judgement call "that thing (or person) did something cool [in the PAST], and i liked it."

trust - by definition - refers to the FUTURE. i am - in the FUTURE - going to give someone the power and authority to do something. i (the person doing the trusting) actually have absolutely NO CLUE as to whether in the FUTURE, regardless of PAST performance, the person will do what they say that they can do.

how on earth can _anyone_ say, "you earned (past tense) my trust (future decision-making)"????

this is how wars are started (and sustained): by people confusing past and future in relation to trust and respect.

so this is where it gets interesting, because the original article is actually making TWO completely SEPARATE and distinct statements:

1) the american public has analysed the PAST actions of scientists, and finds that those actions are [in some way] cool enough to be respected (past tense)

2) the american public has, within themselves, insufficient knowledge about what it is that scientists do - and this has absolutely nothing to do with the scientists but EVERYTHING to do with "the american public" - in order to take the [frightening!] step of placing their trust in the FUTURE decision-making of some individuals-that-happen-to-be-scientists.

i cannot emphasise enough that a decision *to* trust has absolutely nothing to do with the person or thing that you are trusting. the *decision* to place trust in someone really *really* is a separate act from the *analysis* of whether *to* trust.

this is where people get terribly confused. they do some analysis (based usually on past performance), and then they have to make a decision. they *believe* that the [past] analysis *IS* trust. it's not!! even once the [past] analysis has been done, you *still* need to take that step - to trust.

the link between respect and trust is that it is *usually* the respect that we have for people which tips our analysis in favour of certain individuals. but the analysis is NOT respect itself, just as trust (the decision to trust) is not the same thing as respect _either_.

now what i find ironic is that it is someone with a degree in psychology who is talking about trust being "earned". if someone whom the american public implicitly "trusts" (because they have a PhD) is saying "trust is earned", then how is anyone else supposed to know the difference between trust and respect??

2 days ago

Ask Slashdot: Multimedia-Based Wiki For Learning and Business Procedures?

lkcl custom coding time (97 comments)

i wrote a video upload and playback system for a christian-based financial advice organisation that was uncomfortable with the idea of youtube advertising messages appearing in direct contravention of the advice that they were giving their clients.

the "normal" way to do what you are asking would be to simply have a plugin that allows you to specify the youtube URL, and it would be embedded... this is not very hard to do, and, if there is not something out there already, consider paying a programmer to do it. they should not take very long [of the order of days].

however... if, like the christian-based financial advice organisation that i had to create an entire video upload, storage and playback system for, the use of youtube is completely inappropriate for your organisation (because the videos are to be kept confidential, for example) then there really isn't anything out there (i looked) and you will need to write your own.

for this task you should allocate at least two to three months, if you have access to good programmers, bearing in mind that you will need front-end developers as well as engineers capable of back-end server work. one of the problems to solve (in basically reinventing youtube) is that the videos need to be converted to several different formats in order to make it possible to play them back on multiple browser engines.
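
(that format-conversion step is typically a wrapper around ffmpeg. a minimal sketch, assuming the ffmpeg binary is on the PATH; the codec flags are illustrative, not tuned:)

    # transcode an upload into the two formats that between them cover
    # the major browser engines: H.264/MP4 and VP8/WebM.
    import subprocess

    def transcode(src):
        for ext, args in [
            ("mp4",  ["-c:v", "libx264", "-c:a", "aac"]),
            ("webm", ["-c:v", "libvpx",  "-c:a", "libvorbis"]),
        ]:
            dst = src.rsplit(".", 1)[0] + "." + ext
            subprocess.check_call(["ffmpeg", "-y", "-i", src] + args + [dst])

    transcode("upload.mov")   # hypothetical input file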

if this is the path you've chosen then i can help save you some time. but please think carefully about what it is that you need. as a number of other people have pointed out, you've said "i need a wiki to store videos" when what you _should_ have asked is "what's the best way to offer people in-house training videos?", qualified potentially with a list of options such as "my budget is $X", "my time is Y" and "my in-house skill-set is A, B and C".

2 days ago

Developing the First Law of Robotics

lkcl Re:I, Robot from a programmers perspective (165 comments)

Don't get me started on Asimov's work. He tried to write a lot about how robots would function with these laws that he invented, but really just ended up writing about a bunch of horrendously programmed robots who underwent zero testing and predictably and catastrophically failed at every single edge case. I do not think there is a single robot in any of his stories that would not self-destruct within 5 minutes of entering the real world.

hooray. someone who actually finally understands the point of the asimov stories. many people reading asimov's work do not realise that the failure of the Three Laws of Robotics is only explicitly spelled out, in actual words, in the later works commissioned by the asimov estate (when Caliban - a Zero-Law Robot - is introduced; or when it is finally revealed that Daneel - the robot onto whom Giskard psychically impressed the Zeroth Law, to protect *humanity* - is some 20,000 years old and the silent architect of the Foundation). everywhere else it is illustrated indirectly, through many different stories, just as you describe, wisnoskij.

in the asimov series there _are_ actually robots that are successful: the New Law Robots (those that are permitted to *cooperate* with humans - these actually have some spark of creativity); Caliban, who had a Gravitonic brain and was a Zero-Law Robot, an experiment to see if a robot would derive its own laws under free will (it did); and Daneel, whose telepathic ability and Zeroth Law were given to him by Giskard. these robots are the exception: the three-law robots are basically intelligent but entirely devoid of creativity.

you have to think: how can hundreds of millions of robots, each running a copy of the three laws, be anything *but* a danger to human development, preventing and prohibiting any kind of risk-taking?? we already have enough stupid laws on the planet (mostly thanks to america's sue-happy culture and the abusive patent system). we DON'T need idiots trying to implement the failed three laws of robotics.

about two weeks ago

Industry-Based ToDo Alliance Wants To Guide FOSS Development

lkcl COM (MSRPC), Objective-C/J and Software Libre (54 comments)

in looking at why both apple and microsoft have been overwhelmingly successful, i came to the conclusion that it is because both companies use dynamic object-orientated paradigms that allow components from disparate programming languages to be accessible at runtime. COM is the reason why, after 20 years, you can find a random ActiveX component written two decades ago, plug it into a modern windows computer and it will *work*.

Objective-C is the OO concept taken to the extreme: it's actually built into the programming language. COM is a bit more sensible: it's a series of rules (based ultimately on the flattening of data structures into a stream that can be sent over a socket, or via shared memory) which may be implemented in userspace: the c++ implementation has some classes, whilst the c implementation has macros, but ultimately you could implement COM in any programming language you cared to.

the first amazing thing about COM (which is based on MSRPC, which in turn was originally the OpenGroup's BSD-licensed DCE/RPC source code) is that, because it sits on top of DCE/RPC (ok, MSRPC), you have version-control at the interface layer. the second amazing thing is "co-classes", meaning that an "object" may be "merged" with another (multiple inheritance). when you combine this with the version-control capabilities of DCERPC/MSRPC, you get not only binary interoperability between client and server regardless of how many revisions there are to an API, but also the ability to use co-classes to create "optional parameters" (by combining a function with 3 parameters in one IDL file with another same-named function with 4 parameters in another IDL file, 5 in another, and so on).

the thing is that:

a) to create such infrastructure in the first place takes a hell of a lot of vision, commitment and guts.

b) to mandate the use of such infrastructure, for the good of the company, the users and the developers, also takes a lot of commitment and guts. when people actually knew what COM was, it was *very* unpopular; unfortunately, at the time, things like python-comtypes did not yet exist (python-comtypes makes COM so transparent that it has the *opposite* problem: it is so easy that programmers go "what's all the fuss about???" and don't realise quite how powerful what they are doing really is).
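
(to illustrate the transparency point: a minimal python-comtypes sketch, windows-only, using the standard Scripting.FileSystemObject ActiveX component as an example. assumes comtypes is installed.)

    # python-comtypes resolves COM interfaces at runtime from the type
    # library, so a two-decade-old ActiveX component looks like an
    # ordinary python object.
    import comtypes.client

    fso = comtypes.client.CreateObject("Scripting.FileSystemObject")
    print(fso.FileExists(r"C:\Windows\notepad.exe"))   # True on a stock install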

both microsoft and apple were - are - companies where it was possible to make such top-down decisions and say "This Is The Way It's Gonna Go Down".

now let's take a look at the GNU/Linux community.

the GNU/Linux community does have XPIDL and XPCOM, written by the Mozilla Foundation. XPCOM is "based on" COM. XPCOM has a registry. it has the same API, the same macros, and it even has an IDL compiler (XPIDL). however what it *does not* have is co-classes. co-classes are the absolute, absolute bed-rock of COM and because XPCOM does not have co-classes there have been TEN YEARS of complaints from developers - mostly java developers but also c++ developers - attempting to use Mozilla technology (embedding Gecko is the usual one) and being driven UP THE F******G WALL by binary ABI incompatibility on pretty much every single damn release of the mozilla binaries. one single change to an IDL file results, sadly, in a broken system for these third party developers.

the GNU/Linux community does have CORBA, thanks to Olivetti Labs who released their implementation of CORBA some time back in 1997. CORBA was the competitor to COM, and it was nowhere near as good. Gnome adopted it... but nobody else did.

the GNU/Linux community does have an RPC mechanism in KDE (DCOP); its first implementation is famously known for having been written in 20 minutes. not much more needs to be said.

the GNU/Linux community does have gobject. gobject is, after nearly fifteen years, beginning to get introspection, and this is beginning to bubble up to dynamic programming languages such as python. gobject does not have interface revision control.

the GNU/Linux community does actually have a (near-full) implementation of MSRPC and COM: it's part of the Wine Project. a project named TangramCOM did make an attempt to separate COM from Wine: had it succeeded, it would have been maintained as a cut-down fork of the Wine Project. the Wine Project developers' answer - if you ask - to making a GNU/Linux application use COM is that you should convert it to a Wine (i.e. a Win32) application. this is not very satisfactory.

in other words, the GNU/Linux community consists of a set of individuals who are completely uncoordinated, each getting on with the very important task - and i mean that absolutely genuinely - of maintaining the code for which they are responsible.

the problems that they deal with are *not* those of coordinating - at a top level - with *other projects*.

now, whilst this "Alliance" may wish to "guide" the development of the GNU/Linux community, ultimately it comes down to money. do these companies have the guts to say - in a nice way of course - "here's a wad of cash, this is a list of tasks, any takers?"

but, also, does this "Alliance" have the guts to ask "what is actually needed"? rather than saying "this is what you need to do, now get on with it" - which would pretty much guarantee no takers at all - would it not be better for them to get onto the various mailing lists (hundreds if necessary) and actually canvas the developers in the software libre world: "hey, we have $NNN million available, we'd like to coordinate something cross-project that would make a difference, and we'd like *you* to tell *us* what you think is the best way to spend that money"?

where the kinds of ideas floated around could be something as big and ambitious as "converting both KDE and Gnome to use the same runtime-capable object-orientated RPC mechanism so that both desktops work nicely together and one set of configuration tools from one desktop environment could actually be used to manage the other... even over a network with severely limited bandwidth [1]".

or, another idea: ensure that things like heartbleed never happen again, by making sure that the people responsible for the code - on which these and many other companies are making MILLIONS - are actually being PAID.

but the primary question that immediately needs answering is: is this group of companies acting genuinely altruistically, or are they self-serving? on an immediate read of the web site, taken at face value, it does actually look like they are genuine.

however, time will tell. we'll see when they actually start interacting with software libre developers rather than just being a web site that doesn't even have a public mailing list.

[1] i mention that because the last time i suggested this idea people said "what's wrong with using X11?? problem solved... so what are you talking about??" - i'm talking about binary-compatible APIs that stem ultimately from IDL files. *sigh*...

about two weeks ago

German Court: Google Must Stop Ignoring Customer E-mails

lkcl define "customer" (290 comments)

from what i understand of the definition of "customer", a "customer" is "someone who is paying for a service". here, there's no payment involved, therefore there is no contract of sale. i would imagine that it's fairly safe to say that we're most definitely *not* "customers" of google.

if on the other hand these individuals are actually _paying_ google for service and are not receiving a response, _then_ i could understand.

about three weeks ago

Stallman Does Slides -- and Brevity -- For TEDx

lkcl Re:Where to draw the line (326 comments)

there is a beautiful tale which i will share with you, which helps to explain why what Dr Stallman is doing is so important:

"the reasonable man adapts himself to the world. the unreasonable man adapts the world to himself. therefore, all progress depends on the unreasonable man".

now, if it wasn't for Dr Stallman, the average pathological corporation (see the first few minutes of the documentary "The Corporation") would take whatever it could get - and you only have to look at the 98% endemic GPL violations on android smartphones and tablets to see the consequences of non-GPL software such as android.

so if it wasn't for Dr Stallman sticking to his principles, you would probably be using a computer that crashes 10 to 15 times a day for anything but the most mundane of tasks, and was entirely outside of your control.

about three weeks ago

Research Shows RISC vs. CISC Doesn't Matter

lkcl Re:so why is intel's 14nm haswell still at 3.5 watts? (161 comments)

You seem to be conveniently ignoring Intel's Atom and Quark lines. They're all x86 and none of them has a TDP larger than 3w.

i'm not. intel's quark line - the one i saw announced on here last year - tops out at 400mhz. it has... nothing in the way of interfaces that can be taken seriously; it doesn't even have RGB/TTL video out. however, if you are right about the latest intel atom being 3w, then now i am interested! so i am very grateful to you for pointing this out; i will go check.

about 1 month ago

Research Shows RISC vs. CISC Doesn't Matter

lkcl Re:so why is intel's 14nm haswell still at 3.5 watts? (161 comments)

Here is your answer, the A20 is freakishly slow compared to anything Intel would put their name on.

Granted, you can build a tablet to do specific tasks (like decoding video codecs) around a really slow processor and some special-purpose DSPs. But perhaps the companies in that business aren't making enough profit to interest Intel.

interestingly that assumption - that allwinner is not making enough profit - is completely wrong. allwinner is now one of _the_ dominant tablet SoC manufacturers in the world. their first revision (the A10, which was a Cortex A8) actually caused a major recession in the electronics industry when it first came out, as it was only $7.50 compared to the nearest competitor at around $11 to $12. everyone *not* using the A10 at the time was left holding worthless components; contracts for supply were reneged on; the change was so quick that many factories and design houses simply went out of business.

the volumes that allwinner are shipping are simply enormous, and, along with rockchip, their nearest competitor, the tablet market is completely and utterly dominated by processors of exactly the type that you describe as "built to do specific tasks".

those "specific tasks" include "running the android OS at a pace that's good enough for the overwhelming majority of end-users".

in short, intel has a long *long* way to go before they can even remotely consider that they have a processor that can be taken seriously in this very large market, both in terms of price and also in terms of performance.

what is particularly interesting about the comment that you make is that it would seem that intel really does, just as you do, believe that "a really slow processor and some special-purpose DSPs" simply is... not enough. and, contrary to that belief, it can be quite clearly seen by the total dominance of allwinner and rockchip that "a really slow processor and some special-purpose DSPs" really *is* enough.

one of the reasons for that is because if you look at the market you find that you need:

* audio and video CODEC processing. this can be handled by a special-purpose DSP. some of these are now handling 3D and 4096-pixel-wide (4K) screens.

* 3D graphics. these are handled by licensing a whole range of hard macros (special-purpose DSPs) that come with proprietary libraries implementing OpenGL ES 2.0. they're good enough, and some of them are getting _really_ good.

* a (as you put it) "really slow processor" which covers the running of the general OS - although if you look at allwinner's latest processor, the A80, it can hardly be called "slow": it's an 8-core monster.

overall these processors are graded according to price: $5 will get you something dreadful but "good enough", $20 will get you something that's complete overkill for a tablet.

and you know what? the $7 1.2ghz dual-core ARM Cortex A7 allwinner A20 is, when it's put with 2gb of RAM, actually extremely quick. i tested one out with 1gb of RAM running debian GNU/Linux: i fired up xrdp and had *five* rdesktop sessions running OpenOffice and Firefox on it, displayed onto my laptop. it didn't fall over, and it wasn't dreadfully slow.

so i think you, just like intel, are completely and entirely missing the point. and in intel's case, that means entirely missing out on a *huge* market segment.

about 1 month ago

Update: Raspberry Pi-Compatible Development Board Cancelled

lkcl access to broadcom chips (165 comments)

for the rhombus-tech project i also contacted broadcom, to ask for access to one of their chips (this was before the raspberry pi). i can confirm that, just as other people are reporting, the conversation basically indicated that broadcom as a company doesn't wish to make money.

about a month ago

Research Shows RISC vs. CISC Doesn't Matter

lkcl so why is intel's 14nm haswell still at 3.5 watts? (161 comments)

ok, so the effect of RISC vs CISC has absolutely *no* relation to power, right? so why on god's green earth is, for example, the allwinner A20 1.2ghz processor - which is still on 40nm, btw - maxing out at 2.5 watts whilst delivering great 1080p video, reasonable 3D graphics and so on, yet intel is having to go to 14nm and, even at 14nm, STILL can't release a processor that, even when run in a very limited configuration, is listed at anything below 3.5 watts??

there's a quad-core rockchip 28nm SoC with a maximum (actual) top power consumption below 3.0 watts. intel's haswell tablet SoC, on 22nm, is 4.5 watts "Scenario Design Power", i.e. if you only run certain apps in certain ways it *might* keep below 4.5 watts.

i really _really_ want to know why it is that intel cannot deliver an SoC that has an absolute peak limit of 2.5 watts.

about a month ago

Linux Needs Resource Management For Complex Workloads

lkcl Re:complex application example (161 comments)

hi mr thinly-sliced, thank you this is awesome advice, really really appreciated.

about 2 months ago

Linux Needs Resource Management For Complex Workloads

lkcl Re:complex application example (161 comments)

> the first ones used threads and semaphores, via python's multiprocessing.Pipe implementation.

I stopped reading when I came across this.

Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement, and we can always rewrite critical portions - or, later, the entire application - in c once we have delivered a working system that means the client can get some money in and can therefore stay in business.

also, i worked with david, and we benchmarked python-lmdb after adding in support for looped sequential "append" mode: we got a staggering performance metric of 900,000 100-byte key/value pairs written per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if, right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.
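
(for reference, a minimal sketch of the kind of append-mode benchmark described above, using the py-lmdb binding; the key format and record count are illustrative.)

    # sequential "append" writes into LMDB via py-lmdb. append=True is the
    # fast path: keys must arrive in ascending order, which lets LMDB skip
    # the usual b-tree search on every insert.
    import time
    import lmdb

    env = lmdb.open("/tmp/bench.lmdb", map_size=1 << 30)  # 1 GiB map
    n = 1000000
    value = b"x" * 100
    start = time.time()
    with env.begin(write=True) as txn:
        for i in range(n):
            txn.put(b"%016d" % i, value, append=True)     # ascending keys
    print("%.0f puts/sec" % (n / (time.time() - start)))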

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong: converting the exact same program to c is not going to help.

The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15-second delay (from the external source, which we do not control), so a few extra seconds of delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

Taking the incoming messages from UDP and posting them on a message bus should be step one, so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources) so it is good to have that confirmed... my concerns are as follows:

questions:

* how do you then ensure that the process receiving the incoming UDP messages is high enough priority to make sure that the packets are definitely, definitely received?

* what support from the linux kernel is there to ensure that this happens?

* is there a system call which makes sure that data received on a UDP socket *guarantees* that the process receiving it is woken up as an absolute priority over and above all else?

* the message queue destination has to have locking otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?

* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?

this is exactly the kind of thing that is entirely missing from the linux kernel. temporary automatic re-prioritisation was something that was added to solaris by sun microsystems quite some time ago.

to the best of my knowledge the linux kernel has absolutely no support for these kinds of very important re-prioritisation requirements.
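
(partial answers to the first two questions do exist in the mainline kernel: a minimal sketch of the standard knobs - SCHED_FIFO real-time scheduling plus a large kernel receive buffer - assuming root privileges. note that it does nothing for the priority-inversion questions above. port number and buffer sizes are illustrative.)

    # dedicate a process to draining the UDP socket as fast as possible:
    # SCHED_FIFO keeps it ahead of every normal process, and a large
    # SO_RCVBUF gives the kernel room to queue datagrams while we run.
    # requires root (or CAP_SYS_NICE / CAP_NET_ADMIN). linux, python 3.
    import os
    import socket
    from collections import deque

    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    sock.bind(("0.0.0.0", 9999))

    backlog = deque()
    while True:
        data, addr = sock.recvfrom(65535)
        backlog.append(data)   # hand off immediately; do no real work here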

about 2 months ago

Linux Needs Resource Management For Complex Workloads

lkcl complex application example (161 comments)

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes each) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of itself), and the responses - all 15 to 30 thousand per second - must be put back in order before being post-processed.

so, the first implementation is in a single process, and we just about achieve the target of 1,000 jobs per second, but only with about 10 scripts per job.

anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads and semaphores, via python's multiprocessing.Pipe implementation. the performance was beyond dreadful; it was deeply alarming. after a few seconds, performance would drop to zero. strace investigations showed that at heavy load the futex OS call was maxed out near 100%.

next came replacement of multiprocessing.Pipe with unix socket pairs, and of threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.

next came replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
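
(for anyone unfamiliar with the edge-triggered variant mentioned: a minimal sketch of an epoll ET read loop in python. the crucial detail is draining the socket until EAGAIN, because ET only wakes you on new arrivals. 'handle' is a caller-supplied worker dispatch, not part of the original system.)

    # edge-triggered epoll: the event fires once per arrival edge, so the
    # fd MUST be non-blocking and MUST be drained until EAGAIN, or data
    # silently sits in the buffer with no further wakeups.
    import select
    import socket

    def serve(port, handle):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        sock.setblocking(False)        # mandatory for edge-triggered mode
        ep = select.epoll()
        ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)
        while True:
            for fd, events in ep.poll():
                while True:            # drain completely
                    try:
                        data, addr = sock.recvfrom(65535)
                    except BlockingIOError:
                        break          # EAGAIN: buffer empty, wait again
                    handle(data)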

the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:

1) even though it is only 60 bytes per job, the pre-processing required to make the decision about which process to send the job to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX shared-memory queues, but the implementation sucks: it is not possible to identify the shared-memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied, and i had no idea how you would even mount the IPC subsystem in order to list what's been created anyway.
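
(for what it's worth, a sketch of the POSIX message-queue side of that API via the third-party posix_ipc binding. on linux the kernel does expose the objects as files - /dev/shm for POSIX shared memory, and /dev/mqueue for queues once the mqueue filesystem is mounted with "mount -t mqueue none /dev/mqueue" - which at least makes post-crash cleanup scriptable. the queue name and message are invented for the example.)

    # POSIX message queue via posix_ipc (pip install posix_ipc).
    import posix_ipc

    mq = posix_ipc.MessageQueue("/jobs", posix_ipc.O_CREAT)
    mq.send(b"job-0001")
    msg, prio = mq.receive()
    print(msg)                               # b'job-0001'
    mq.close()
    posix_ipc.unlink_message_queue("/jobs")  # removes /dev/mqueue/jobs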

i gave serious consideration to using the python LMDB bindings, because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance. i have not fully investigated why: it _should_ work extremely well, because of the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it, appending job data to it, and then using the mmap'd file for spin-locking to indicate when a job is being processed.
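
(a minimal sketch of that last idea: fixed-size job slots in a shared mmap'd file, with a one-byte state flag per slot acting as the crude spin-lock. the slot layout and sizes are invented for illustration, and a production version would need real atomics or futexes rather than plain byte writes.)

    # shared mmap'd job file: [1-byte state][60-byte payload] per slot.
    # state 0 = empty, 1 = ready, 2 = claimed.
    import mmap
    import os

    SLOT, NSLOTS = 61, 1024
    fd = os.open("/tmp/jobs.mmap", os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, SLOT * NSLOTS)
    buf = mmap.mmap(fd, SLOT * NSLOTS)

    def post(i, payload):                  # producer: write payload, then flag
        off = i * SLOT
        buf[off + 1:off + SLOT] = payload.ljust(60, b"\0")
        buf[off] = 1                       # mark ready last

    def claim(i):                          # worker: spin until ready, then take
        off = i * SLOT
        while buf[off] != 1:
            pass
        buf[off] = 2
        return bytes(buf[off + 1:off + SLOT])

    post(0, b"job-0001")
    print(claim(0))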

after all of these crazy implementations, i basically have absolutely no confidence in the linux kernel, nor in the GNU/Linux POSIX-compliant implementation of the OS on top of it: i have no confidence that it can handle this kind of load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.

about 2 months ago

Pseudonyms Now Allowed On Google+

lkcl legal ramifications of identity verification (238 comments)

i think one of two things happened here. the first is that it might have finally sunk in at google that even just *claiming* to have properly verified user identities leaves them open to lawsuits, should they fail to have properly carried out the verification checks that other users *believe* they have carried out. on every other service, people *know* that you don't trust the username: for a service to claim that it has truly verified the identity of the individual behind the username is reprehensibly irresponsible.

the second is that they simply weren't getting enough people, and so have "opened up the doors".

about 2 months ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:Hardware is hard (71 comments)

Read "hard" as "Expensive as Hell"

That is part of it, yes. It requires a wide range of differently experienced people: low-level software, high-level software, circuit design, assembly, layout, component sourcing, factory liaison, DFT, manufacturing etc.

Then you need to get them all to work together. And you have to pay them.

... y'know... one of the reasons i came up with the idea to design mass-volume hardware that would be eco- and libre-friendly was because, after having developed the experience to deal with both low-level and high-level software, and having done some circuit design at both school and university, i figured that the rest should not be too hard to learn... or manage.

you wanna know the absolute toughest part [apart from managing people]? it's the component sourcing. maan, is that tough. if you want a laugh [out of sheer horror, not because it was actually funny], look up the story of how long it took to find a decently-priced mid-mount micro-HDMI type D connector [8 months].

so anyway, i set out to find people with the prerequisite skills that i *didn't* have, and offered them a chance to participate and profit. the list of people who have helped and then fallen by the wayside... i... well... i want to succeed at this so that i can give them something in return for what they did.

about 3 months ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:Would it kill you to hint at what Improv is (71 comments)

If only there was some way to get more information, perhaps with a sort of "link" of some kind to a more detailed description.

here is the [old] specification of the [revision 1] CPU Card:
http://rhombus-tech.net/allwin...

the current revision 2 - which i am looking for factories to produce (RFQs have been sent out already) - we will try with 2gb of RAM. this is just a component change, not a layout change, so the chances of success are high.

here is the [old] specification of the Micro-Engineering Board:
http://rhombus-tech.net/commun...

that was our "minimal test rig", which helped verify the interfaces on the first CPU Cards (and will help verify the next ones as well, with no further financial outlay needed. ever. ok, that would be true if i hadn't taken the opportunity to change the spec before we go properly live with it!! you only get one shot at designing a decade-long standard... i'd rather get it right)

this will be the basis of the planned crowd-funding campaign: it's more of a micro-desktop PC:
http://rhombus-tech.net/commun...

the micro-desktop chassis is very basic: VGA, 2x USB, Ethernet, Power In (5.5 to 21V DC). all the other interfaces are on the CPU Card (USB-OTG, Micro-HDMI, Micro-SD). however, unlike the Micro-Engineering Board, the power is done with a view to the average end-user (as is the VGA connector, which means 2 independent screens, straight out of the box).

does that help answer the question?

about 3 months ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl Re:What was desirable about it? (71 comments)

Open hardware sounds cool, but as others have noted, good hardware design is both difficult and expensive. Considering how rapidly the components advance (CPU/SoC, I/O, displays, etc.),

aaaah, gotcha! that's the _whole_ reason why i designed the long-term modular standards: so that products *can* be split around the arms race of CPU/SoC on the one hand and battery life / display etc. on the other.

and the factory that we are in touch with (the big one) _loves_ this concept, because the one thing that you might not be aware of is that even the big guys cannot react fast enough nowadays.

imagine what it would mean to them to be able to buy HUGE numbers of CPUs (and related components) and drop them into a little module that they KNOW is going to work across every single product that conforms to the long-term standard. in 6 months' time there will be a faster SoC, more memory, less power, but that's ok, because *right now* they can get better discounts on the SoC that's available *now*.

on the other side of the interface, imagine what it would mean to them to be able to buy the exact same components for a base unit for, well... three to five years (or until something better came along or some component went end-of-life).

it took them a while, but they _loved_ the idea. the problem is: as a PRC state-sponsored company they are *prohibited* from doing anything other than following the rules... i can't tell you what those rules are (they're confidential), but it meant that we had to find other... creative ways to get the designs made.

We're in a world where a first generation Nexus 7 tablet sells for $140 or less. At Walmart.

yeah. now that prices are dropping, just like in the PC price wars, the profits are becoming so small that manufacturers are getting alarmed (or are just dropping out of the market entirely). those people are now looking for something else; they're willing to try something that might get them a profit. what should we tell them?

anyway: thank you for your post, darylb, it provides a very useful starting point for some of the key insights i want to get across to people.

about 3 months ago

Improv Project, Vivaldi Tablet Officially Dead

lkcl moving forward: next crowdfunding launch (71 comments)

short version: the plan is to carry on, using the lessons learned to try again, with a crowd-funding campaign that is transparent. please keep an eye on the mailing list; i will also post here on slashdot when it begins.

http://lists.phcomp.co.uk/pipe...

long version:

this has been a hugely ambitious venture; i think henrik's post explains much:
http://lists.phcomp.co.uk/pipe...

the - extremely ambitious - goal set by me is to solve a huge range of issues, the heart of which is to create environmentally-conscious mass-volume appliances that software libre developers are *directly* involved in at every step of the way.

so, not to be disparaging to any project past or future, but this isn't "another beagleboard" or "another raspberry pi beater": it's a way to help the average person *own* their computer appliances and save money over the long term. software libre developers are invited to help make that happen.

by "own" we mean "proper copyright compliance, no locked boot
loaders and a thriving software libre environment that they can
walk straight into to help them do what they want with *their*
device... if they want to".

the actual OS installed on the appliance will be one that is relevant for that appliance, be it ChromeOS, Android, even Windows or MacOSX. regardless of the pre-installed OS, the products i am or will be involved in *will* be ones that software libre developers would be proud to own and would recommend even to the average person.

by "saving money over the long term" we mean "the device is
split into two around a stable long-term standard
with a thriving second-hand market on each side, with new
CPU Cards coming along as well as new products as well.
buy one CPU Card and one product, it'll be a little bit more
expensive than a monolithic non-upgradeable product,
but buy two and you save 30% because you only need
one CPU Card. break the base unit and instead of the whole
product becoming land-fill you just have to replace the base,
you can transfer not just the applications and data but
the *entire computer*".

it was the environmental modular aspects, as well as the commitment to free software *and* the desire to reach mass-volume levels, that attracted aaron to the Rhombus Tech project.

perhaps unsurprisingly - and i take responsibility for this - the details of the above did not translate well into the Improv launch. the reason i can say that is because even henrik, who has been helping out and has been a member of the arm-netbooks mailing list for quite some time, *still* has not fully grasped the full impact of the technical details behind the standards.

(hi henrik, how are ya. thank you very very much for helping with the boot of the first A10 / A20 CPU card; your post on the mailing list last week was very helpful because it shows that i still have a long way to go to get the message across in a short, concise way.)

the level of logical deduction, the details that need to be taken into account, the number of processors whose full specifications must be known in order to make a decent long-term stable standard... many people i know, reading that sentence, will think i am some sort of self-promoting egotistical dick, but i can tell you right now: you *don't* want to be holding in your head the kinds of mind-numbing details needed to design a long-term mass-volume computing standard. it's fun... but only in a masochistic sort of way!

anyway. i did say long, so i have an excuse, but to get to the point: now that the money is being returned, we can start again with a new campaign - using a crowdfunding site that shows numbers, and starting with a lower target (250) that offers more value for that same amount of money to everyone involved as various stretch goals (500, 1,000, 2,500) are achieved. these will include casework, FCC certification, OS images prepared and, most importantly as far as i am concerned, what i feel should be a substantial donation to the KDE team, in recognition of the help - through some tough lessons if we are honest - that they have given, as well as the financial outlay that they've put forward because they believed in what we're doing.

i'd like to hear people's thoughts and advice here, because this really is an exceptionally ambitious project that no commercial company, let alone a software-libre group, would ever consider, precisely because it requires a merging of *both* commercial aspects *and* software libre principles and ethics. the environmental angle and long-term financial savings are what sells it to the end-users, though.

about 3 months ago

Ask Slashdot: What Inspired You To Start Hacking?

lkcl a Commodore Pet 3032 (153 comments)

1978, aged 8, our school had a commodore pet 3032. i typed in a simple program in BASIC:

    10 for i = 1 to 40
    20 print tab(i), i
    30 next i
    40 goto 10

and watched the numbers 1 to 40 scroll across the screen. i figured "huh, that was obvious, i can do that", and 25 years later i was reverse-engineering NT 4.0 Domains network traffic (often literally one bit at a time) by the same kind of logical inference: observing results and deducing knowledge.

by 2006 i learned that there is something called "Advaita Vedanta", which is crudely known in the west as "epistemology". Advaita Vedanta basically classifies knowledge (there are several types: inference is just one of them), and knowing *that* allows you to have confidence in your abilities. up until i heard about Advaita Vedanta i was "hacking blind and instinctively", basically. now i know that reverse-engineering is basically an extreme form of knowledge inference. which is kinda cool.

about 4 months ago

Ask Slashdot: What Inspired You To Start Hacking?

lkcl Re:Dark Reign (153 comments)

Anybody here ever play that game?

yeah, me! were you around in 1995-1996 by any chance? in CB1 Cafe in cambridge UK i was the person who discovered that you could put zombies into the underground phase-tunnel vehicles, then sneak behind enemy lines (the underground vehicle could see "up" into one square at a time). i would go looking for artillery because artillery by default had a reaaally nasty habit of auto-firing at close-range enemies on a huuge delay. so, what would happen was: first zombie went up, artillery would turn and begin loading, zombie would go to nearest artillery craft and suicide, blowing up several. all artillery would fire, blowing up even more. second zombie up, artillery lock-and-load, zombie makes a beeline for.... you get the idea.

anyway, the idea was good enough that it ended up on the hints-and-tips page. it turns out that some of the people we played against worked at activision :)

about 4 months ago

Submissions


Power-loss-protected SSDs tested: only Intel S3500 passes

lkcl writes  |  about 9 months ago

lkcl (517947) writes "After the reports on SSD reliability and after experiencing a costly 50% failure rate on over 200 remote-deployed OCZ Vertex SSDs, a degree of paranoia set in where I work. I was asked to carry out SSD analysis with some very specific criteria: budget below £100, size greater than 16Gbytes and Power-loss protection mandatory. This was almost an impossible task: after months of searching the shortlist was very short indeed. There was only one drive that survived the torturing: the Intel S3500. After more than 6,500 power-cycles over several days of heavy sustained random writes, not a single byte of data was lost. Crucial M4: fail. Toshiba THNSNH060GCS: fail. Innodisk 3MP SATA Slim: fail. OCZ: epic fail. Only the end-of-lifed Intel 320 and its newer replacement the S3500 survived unscathed. The conclusion: if you care about data even when power could be unreliable, only buy Intel SSDs."

QiMod / Rhombus Tech A10 EOMA-68 CPU Card running Debian 7 (armhf)

lkcl writes  |  about a year ago

lkcl (517947) writes "With much appreciated community assistance, the first EOMA-68 CPU Card in the series, based on an Allwinner A10 processor, is now running Debian 7 (armhf variant). Two demo videos have been made. Included in the two demos: fvwm2, midori web browser, a patched version of VLC running full-screen 1080p, HDMI output, powering and booting from Micro-HDMI, and connecting to a 4-port USB Hub. Also shown is the 1st revision PCB for the upcoming KDE Flying Squirrel 7in tablet.

The next phase is to get the next iteration of test / engineering samples out to interested free software developers, as well as large clients, which puts the goal of having Free Software Engineers involved with the development of mass-volume products within reach."


Rhombus Tech 2nd revision A10 EOMA68 Card working samples

lkcl writes  |  about a year and a half ago

lkcl (517947) writes "Rhombus Tech and QiMod have working samples of the first EOMA-68 CPU Card, featuring 1GByte of RAM, an A10 processor and stand-alone (USB-OTG-powered with HDMI output) operation. Upgrades will include the new Dual-Core ARM Cortex A7, the pin-compatible A20. This is the first CPU Card in the EOMA-68 range: there are others in the pipeline (A31, iMX6, jz4760 and a recent discovery of the Realtek RTD1186 is also being investigated).

The first product in the EOMA-68 family, also nearing a critical phase in its development, will be the KDE Flying Squirrel, a 7in user-upgradeable tablet featuring the KDE Plasma Active Operating System. Laptops, Desktops, Games Consoles, user-upgradeable LCD Monitors and other products are to follow. And every CPU that goes into the products will be pre-vetted for full GPL compliance, with software releases even before the product goes out the door. That's what we've promised to do: to provide Free Software Developers with the opportunity to be involved with mass-volume product development every step of the way. We're also on the look-out for an FSF-Endorseable processor which also meets mass-volume criteria which is proving... challenging."


Rhombus Tech 2nd revision A10 EOMA68 Card

lkcl writes  |  about a year and a half ago

lkcl writes "The 2nd revision of the A10 EOMA-68 CPU Card is complete and samples are due soon: one sample is due back with a Dual-Core Allwinner A20. This will match up with the new revision of the Vivaldi Spark Tablet, codenamed the Flying Squirrel. Also in the pipeline is an iMX6 CPU Card, and the search is also on for a decent FSF-Endorseable option. The Ingenic jz4760 has been temporarily chosen. Once these products are out, progress becomes extremely rapid."

Rhombus Tech AM389x/DM816x EOMA-68 CPU Card started

lkcl writes  |  about 2 years ago

lkcl writes "The Rhombus Tech Project is pleased to announce the beginning of a Texas Instruments AM389x/DM816x EOMA-68 CPU Card: thanks to earlier work on the A10 CPU Card and thanks to Spectrum Digital, work on the schematics is progressing rapidly. With access to more powerful SoCs such as the OMAP5 and Exynos5 being definitely desirable but challenging at this early phase of the Rhombus Tech initiative, the AM3892 is powerful enough (SATA-II, up to 1600mhz DDR3 RAM, Gigabit Ethernet) to still take seriously even though it is a 1.2ghz ARM Cortex A8. With no AM3892 beagleboard clone available for sale, input is welcomed as to features people would like on the card. The key advantage of an AM3892 EOMA-68 CPU Card though: it's FSF Hardware-endorseable, opening up the possibility — at last — for the FSF to have an ARM-based tablet or smartbook to recommend. Preorders for the AM3892 CPU Card are open."

Rhombus Tech A10 EOMA-68 CPU Card schematics completed

lkcl writes  |  about 2 years ago

lkcl writes "Rhombus Tech's first CPU Card is nearing completion and availability: the schematics have been completed by Wits-Tech. Although it appears strange to be using a 1ghz Cortex A8 for the first CPU Card, not only is the mass-volume price of the A10 lower than other offerings; not only does the A10 classify as "good enough" (in combination with 1gb of RAM); but Allwinner Tech is one of the very rare China-based SoC companies willing to collaborate with Software (Libre) developers without an enforced (GPL-violating) NDA in place. Overall, it's the very first step in the right direction for collaboration between Software (Libre) developers and mass-volume PRC Factories. There will be more (faster, better) EOMA-68 CPU Cards: this one is just the first."

Google+ Identity Fraud

lkcl writes  |  about 2 years ago

lkcl writes "http://en.wikipedia.org/wiki/Nymwars outlines the problem with Google+ as an "identity" service, but nowhere does this page discuss any compelling down-sides for Google themselves. One is the risk of lawsuits where people *relied * on Google+, were lulled into a false sense of security by Google+, failed to follow standard well-established online internet identity precautions, and were defrauded as a *direct* result of Google's claims of "safety". Another is the legal cost of involvement in, and the burden of proof that would fall onto Google in identity-fraud-related cases of online stalking, internet date rape and murder. Can anyone think of some other serious disadvantages that would compel google to rethink its google+ identity policy? I would really like to use Google Hangouts, but I'll be damned if i'll use it under anything other than under my 25-year-established pseudonym, "lkcl". What's been your experience with applying for an "unreal" identity?"

Pyjamas pyjs.org Domain hijacked

lkcl writes  |  more than 2 years ago

lkcl writes "The domain name for the pyjamas project, pyjs.org, was hijacked today by some of its users. The reasons: objections over the project leader's long-term goal to have pyjamas development be self-hosting (git browsing, wiki, bugtracker etc. all as Free Software Licensed pyjamas applications). Normally if there is disagreement, a Free Software Project is forked: a new name is chosen and the parting-of-the-ways is done if not amicably but at least publicly. Pyjamas however now appears to have made Free Software history by being the first project to have its domain actually hijacked. rather embarrassingly, in the middle of a publicly-announced release cycle. Has anything like this ever happened before?"

B2G's Store and Security Model

lkcl writes  |  more than 2 years ago

lkcl writes "Boot to Gecko is a full and complete stand-alone Operating System that is to use Gecko as both its Window Manager and Applications UI. Primarily targetted at smartphones, security and the distribution of applications are both facing interesting challenges: scaling to mass-volume proportions (100 million+ units). The resources behind Google's app store (effectively unlimited cloud computing) are not necessarily guaranteed to be available to Telcos that wish to set up a B2G store. Although B2G began from Android, Mozilla's primary expertise in the development of Gecko and in the use of SSL is second to none. There is howevera risk that the B2G Team will rely solely on userspace security enforcement (in a single executable) and to try inappropriate use of CSP, Certificate pinning and other SSL techniques for app distribution, resulting in some quite harmful consequences that will impact B2G's viability. The question is, therefore: what security infrastructure surrounding the stores themselves as well as in the full B2G OS itself would actually be truly effective in the large-scale distribution of B2G applications, whilst also retaining flexibility and ease of development that would attract and retain app writers?"

EOMA-PCMCIA modular computer aiming for $15 and Fr

lkcl writes  |  more than 2 years ago

lkcl writes "An initiative by a CIC company Rhombus Tech aims to provide Software (Libre) Developers with a PCMCIA-sized modular computer that could end up in mass-volume products. The Reference Design mass-volume pricing guide from the SoC manufacturer, for a device with similar capability to the raspberrypi, is around $15: 40% less than the $25 rbpi but for a device with an ARM Cortex A8 CPU 3x times faster than the 700mhz ARM11 used in the rbpi. GPL Kernel source code is available. A page for community ideas for motherboard designs has also been created. The overall goal is to bring more mass-volume products to market which Software (Libre) Developers have actually been involved in, reversing the trend of endemic GPL violations surrounding ARM-based mass-produced hardware. The Preorder pledge registration is now open (account creation required)."

Where are the Ultra-efficient production Hybrid EV

lkcl writes  |  about 2 years ago

lkcl writes "Has anyone else wondered why ultra-efficient hybrid vehicles have to look like this, why the Twizy doesn't have doors as standard and has leased batteries, or why the Volkswagen XL1 does 313mpg but only seats 2 people and isn't yet in production? Why were both Toyota's RAV4-EV as well as GM's EV1 not just discontinued but destroyed? Against this background, what makes this 3-seat Hybrid EV design different, and what could make it successful? Although this article on hybridcar.com outlines the problem, the solution isn't clear-cut, so how can ultra-efficient affordable hybrids actually end up on the road?"

An accidental Free Software Accelerated 3D GPU

lkcl writes  |  more than 3 years ago

lkcl writes "In evaluating the Xilinx Xilinx Zynq-7000 for use in a FSF Hardware-endorsed Laptop and possible OpenPandora v2.0, a series of Free Software projects were accidentally linked together — Gallium3D and LLVM 2.7's MicroBlaze FPGA Target. The combination is the startling possibility that the Xilinx Zynq-7000 may turn out to be the perfect platform for a Free Software 3D GPU, for use in Tablets, Laptops, and the OpenGraphics Project. entirely by accident."

RISC Notebooks: does 28nm make all the difference?

lkcl writes  |  more than 3 years ago

lkcl writes "Predictions have been made for quite some time that ARM or MIPS notebooks and servers will be here. Failed prototypes date back over two years, with the Pegatron Netbook never finding a home; the $175 Next Surfer Pro being frantically withdrawn last week, the Lenovo Skylight being pulled weeks before it was to launch, and a rash of devices successfully making it to market with long-term unusable 1024x600 LCD panels and a maximum of 512mb RAM being the only real rare (and often expensive) option. The Toshiba AC100 and the HP/Compaq Airlife 100 are classic examples.

So the key question is: what, exactly, is holding things back? With the MIPS 1074k architecture, a Quad-Core 1.5ghz CPU at 40nm would only consume 1.3 watts, and at 28nm could easily exceed 2.0ghz and use 30% less power. The MIPS GS464V, designed by China's ICT, has such high SIMD vector performance that it will be capable of 100fps 1080p at 1ghz on a single core, and has hardware-assisted accelerated emulation of over 200 x86 instructions. A Dual-Core Cortex A9 consumes 0.5 watts at 800mhz and 1.9 watts at 2ghz: 28nm would mean a whopping 3ghz could potentially be achieved. And Gaisler have a SPARC-compatible core, the LEON4, which can be configured with up to 8 cores and run at up to 1.5ghz at 30nm, giving an impressive 1.7 DMIPS/MHz per core that matches both the MIPS 1074k and the ARM Cortex A9 designs.

Due to the incredibly small core size, mass-volume SoC processors based around these cores could conceivably cost an estimated $12 for a Quad-Core 28nm MIPS 1074k and $15 for a Dual-Core 28nm Cortex A9, bringing the price of an impressive desktop system easily down to $80 retail and a decent laptop to $150.

So why, if this is what's possible, providing such fantastic performance at incredible prices, are we still seeing "demo" products like the TI OMAP4 smartphone, still waiting for the Samsung Exynos 4210 and for Nusmart's 2ghz 2816? Why are we not seeing any products with decent screens and memory from mainstream companies like Dell, IBM and HP, but instead a rash of low-performance, low-quality, GPL-violating, Chinese-made, Android-based knock-offs, touted as "web-ready", with webcams and microphones that don't even work?

What's it going to take for these alternative processors to hit mainstream? Do we really have to wait for 24nm or less, where it would be possible to run these RISC cores at ungodly speeds of 4ghz or above, and where 20,000 tiny RISC cores could fit on a single wafer, resulting in prices of $4 to $5 per CPU? Or, with the rise of Android and GNU/Linux operating systems, would a lowly 28nm multi-core RISC-based System-on-a-Chip be enough for most peoples' needs?"

Link to Original Source


FreedomBox Foundation hits target in 5 days

lkcl writes  |  more than 3 years ago

lkcl writes "The FreedomBox Foundation hit its minimum target of $60,000 in just 5 days, thanks to KickStarter Pledges, and seeks further contributions to ensure that the Project is long-term viable. Curiously but crucially, the FreedomBox fund is for Software only, yet neither suitable low-cost $30 ARM or MIPS "plug computers", envisaged by Eben Moglen as the ideal target platform, nor mid-to-high-end ARM or MIPS low-cost developer-suitable laptops actually exist. What do slashdot readers envisage to be the way forward, here, given that the goals of the FreedomBox are so at odds with mass-market Corporate-driven hardware design decisions?"
Link to Original Source

Toshiba AC100 Linux 2.6.29 Kernel Source available

lkcl writes  |  more than 3 years ago

lkcl (517947) writes "Toshiba Digital Media Group, Japan, kindly responded to a request for all GPL source code and supplied it on CD. The kernel source has been uploaded to the arm-netbook alioth git repository (branch ac100/2.6.29/lkcl). The AC100 has already been hacked, rooted and, sadly, ubuntu'd, as noted on debian-arm. Availability of the "official" kernel source should make getting WiFi etc. working somewhat easier. Two key questions remain, though: why does such a fantastic machine, with a top-end dual-core ARM Cortex-A9 CPU, come with only 512MB of RAM, and why supply only the truly dreadful and unusable 1024x600 LCD when it is known to be the cause of so many negative reviews?"
Link to Original Source

Open University Linux Course Irony

lkcl writes  |  more than 4 years ago

lkcl (517947) writes "A new Open University course, Linux T155, aims to teach the benefits of Linux and Free Software, covering the philosophy and history as well as practical benefits such as being virus-free and prolonging the working life of hardware. Unfortunately, in a delicious piece of irony, potential tutors who stand by Free Software principles, and who are thus best suited to a teaching post, must violate the very principles they are expected to instil by filling in a Microsoft Word-formatted application form. An article on the Advogato Free Software advocacy site describes where efforts to change this "accidental" policy of using proprietary file formats have succeeded and where they have failed."
Link to Original Source

Python converted to JavaScript: executed in-browser

lkcl writes  |  about 5 years ago

lkcl writes "Two independent projects Skulpt and Pyjamas are working to bring python to the web browser (and the javascript command-line) the hard way: as javascript. Skulpt already has a cool python prompt demo on its homepage; Pyjamas has a gwtcanvas demo port and a GChart 2.6 demo port. Using the 64-bit version of google v8 and PyV8, Pyjamas has just recently successfully run its python regression tests, converted to javascript, at the command-line. (Note: don't try any of the above SVG demos with FF2 or IE6: they will suck.)"
Link to Original Source

Python version of GWT spawns port of GChart

lkcl writes  |  more than 5 years ago

lkcl writes "GChart is a sophisticated graph and charting library, written in java for the popular Google Web Toolkit framework. Using a semi-automated java to python conversion tool a reasonably useable but not entirely bug-free version of GChart's 19,000 lines of code has been ported, in under three days, to the Pyjamas Desktop/Web Widget Set. Whilst development is primarily taking place using Pyjamas-Desktop, an online demo of the javascript compiled version can be seen here (note: reduce performance expectations accordingly, if using IE6 or FF2)."
Link to Original Source

Nerd Needs Help With Webkit Python Fiasco

lkcl writes  |  more than 5 years ago

lkcl writes "I'm in need of slashdot advice and help, as I recognise that I'm taking the wrong approach. Background: I'm a free software developer and, to put it charitably, "I don't get out much" (i.e. I don't see why people have difficulty with what I do). As a result, I've been banned from about five major free software projects mailing lists, and blamed for causing problems (but often I then see reports months later, of other people encountering the same thing with the same team!). I decided a year ago to put python on a par with javascript when it comes to DOM bindings, and have ported pyjamas to XULrunner, webkit, and recently IE's MSHTML. The trouble I'm having is with webkit (webkit doesn't have DOM python bindings, but IE and XULRunner do). At around 300 comments, we've got past the roaring bun-fight stage, and just got to the point where things were finally moving along, when one of the webkit maintainers decided to engineer an excuse to disable my bugs.webkit.org account. Ordinarily, I'd leave this alone, but I feel that this complex project — of making python truly the equal of javascript when it comes to web application development — is too important to just let it go. So — seriously: I'm not messing about, here; I'm not looking for an excuse to whinge; I truly need some advice and help because i am absolutely not going to quit on this one. What do you feel needs to be done, to get Webkit its free software python bindings?"

