Comment need to go somewhere (Score 2) 117

Ingested carbs need to go somewhere. Brain consumes a bit, and the remaining part is the problem. If one has enough physical activity, carbs get burned in muscles. Otherwise, they are converted into fat,

Not quite.
Carbs don't just sit there waiting (like gas in a car's tank);
the body will process them, depending on tons of hormonal messages.
Carbs will get used, and making fat is only one of the possibilities the body can choose.

e.g.: if you do sports, not only will you burn carbs for energy during the sport, but you will also raise the level of some growth hormones, encouraging your body to use the available resources to build more muscles instead of storing them long term.

remain in bloodstream (this is diabetes) until cleared by kidneys.

Huh... Nope, not at all.
Diabetes is absolutely not "the excess sugar in the blood".
Diabetes is mainly the signaling pathway that normally orders the uptake of the sugar being broken.
The two types of diabetes differ in which step of the pathway is broken
(either the production of insulin, or the receptors that should detect it).

The fact that people who overeat have an increased risk of diabetes isn't due to extra sugar staying in the blood; it's due to the body getting desensitized to the insulin. (Mainly because, to avoid having extra sugar in the blood, the body will secrete extra insulin; but over time that extra insulin will down-regulate the receptors, leading to the pathway not working that well anymore.) (Also, fat tissue secretes its own signaling hormones. Obese patients have so much fat that they produce excessive amounts of some hormones, and their signaling disturbs other pathways.)
So excess sugar isn't the cause of diabetes (and is actually correctly compensated for at the beginning); it's the result of an insulin pathway that got fucked up, e.g. by the bad eating habits.

Comment GIT (Score 2) 159

Does the Git usage of SHA-1 *really* cause silent problems? I'm not sure how Git works internally but I was under the impression that it hashes whole objects, like individual source files at least.

The individual objects inside git aren't files.
The individual objects are commits (i.e.: the content of a patch, plus a bit of information like pointers to the past commits to which this patch applies).
To make things easier, a handy number designates each commit - this is currently generated by SHA-1.

(Git is a content-addressable platform. You don't access objects by name, you access them by their content. But instead of using the whole content to address them, you use addresses generated by SHA-1 to access the various blocks.
So to say which parent commits a given patch applies to, you just mention them by using the SHA-1 sum of the content of those commits.)
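To make the content-addressing concrete, here's a minimal Python sketch of how git derives a SHA-1 address for a blob object (git hashes a small "blob &lt;size&gt;\0" header plus the content, not the raw bytes alone; the example contents are made up):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the SHA-1 address git would assign to a blob.

    Git hashes the object *with* a small header ("blob <size>\\0"
    prepended), not the raw file content alone.
    """
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Two different contents get two different addresses...
a = git_blob_sha1(b"clean code\n")
b = git_blob_sha1(b"backdoored code\n")
assert a != b

# ...and identical content always maps to the same address, which is
# why git treats an already-known hash as "I have this content already".
assert a == git_blob_sha1(b"clean code\n")
```

That last property is exactly what the collision attack below abuses: if two *different* contents hashed to the same address, git would wrongly treat them as the same object.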

A theoretical attack would be:
- try to generate 2 commits:
one adds a clean piece of code; the other adds a backdoored piece of code;
but both commits hash to the same SHA-1, so git would consider them "the same content".
- then try to force your target to re-download the whole repo from scratch from your backdoored history (otherwise git will simply ignore commits with a SHA-1 sum it already has - it thinks it already has the same content).

In practice it's currently not doable.
The only thing that Google managed to generate is a pair of block series. Each series contains completely random junk. Both series end up generating the exact same SHA-1 sum even though the random junk is different.
- That is exploitable in a PDF (or any other binary format that supports scripting; you could even do it in an EXE): using the embedded scripting, present 2 different contents depending on which random junk is present.
- That is not exploitable in a source-code commit: you would need a believable explanation for why the random junk is present in the patched source code.
AND you would need a piece of code which reacts differently (normal vs. backdoor) depending on which random junk is present - pulling that off unnoticed would require "Underhanded C Contest"-level ingenuity.

That's it: all you get is blocks of random garbage.
Google currently can't produce colliding hashes from arbitrary pieces of data ("Hey Google: here is legit script A, and there's malicious script B. Add a small nonce at the end so they both end up having the same SHA-1 sum") ("Actually, don't add a nonce, that would be too conspicuous; try to tweak the punctuation in the comments instead").
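One practical consequence is that a SHA-1 match alone no longer *proves* two inputs are identical. A defensive check can be sketched like this (a hypothetical illustration of the idea, not anything git itself does - it double-checks a SHA-1 match with a hash from the unbroken SHA-2 family):

```python
import hashlib

def same_content(data_a: bytes, data_b: bytes) -> bool:
    """Return True only if the two inputs really are identical.

    A SHA-1 match is used as a cheap first check (deduplication hint),
    then a SHA-256 comparison rules out a SHAttered-style SHA-1
    collision between different contents.
    """
    if hashlib.sha1(data_a).digest() != hashlib.sha1(data_b).digest():
        return False
    return hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()

assert same_content(b"same bytes", b"same bytes")
assert not same_content(b"legit script A", b"malicious script B")
```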

Also, as you mention, further edits would be problematic:
if I edit script A and submit a patch, this patch will be valid, but it will completely fail to apply on top of script B.

Comment SHA-1 in git and co (Score 1) 159

A cryptographic hash function has the properties you mention, plus the fact that it must not be easily reversible and uniformly distribute results over its entire output space.

The latter is a property which is not guaranteed by most common checksums.
Thus, when you need a hash function to give a number to use as a handy "nickname" for a collection of data (e.g.: for a hash look-up table; or for a content-addressable system like git to create said addresses for a given content - and thus to give a serial number to a commit; or apparently also in SVN to give a simple number designating commits), it can be a good choice to pick a cryptographic hash like SHA-1, because it guarantees you this additional property, which a vanilla checksum could lack.

Comment Uniformity (Score 1) 197

If you only care about random bit flips CRC32 will work very well and be much faster than MD5 or SHA-1.

Well, not exactly.
- MD5 and SHA-1 have fast hardware implementations on some CPUs. CRC32 won't necessarily be a huge performance gain.

SHA-1 is used as a bit more than a simple glorified checksum in git.
It is also used to give a handy number by which you designate commits, etc.
(i.e.: to compute a hash - e.g.: as would also be used in a hash look-up table).
That requires good output uniformity.
In other words, you need a hashing function that "spreads" its output across the whole output domain.
(To give an over-simplified example: if, due to a poor design, all patches ended up having hashes that begin with the hex digit "9", that would be a poor hashing function for these needs. If you used it in a hash look-up table, one part of the table would be over-filled while other parts would still be empty.)
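A rough way to see that "spread" in practice (a quick sketch, not a rigorous uniformity test): hash a batch of similar-looking keys into 16 buckets via SHA-1 and check that no bucket ends up grossly over- or under-filled:

```python
import hashlib

def bucket_of(key: bytes, n_buckets: int = 16) -> int:
    # Use the first byte of the SHA-1 digest to pick a bucket
    # (256 values map evenly onto 16 buckets).
    return hashlib.sha1(key).digest()[0] % n_buckets

counts = [0] * 16
for i in range(1600):
    counts[bucket_of(b"patch-%d" % i)] += 1

# With a well-spread hash, each of the 16 buckets gets roughly
# 1600 / 16 = 100 keys; none should be wildly off that mark.
assert all(50 < c < 200 for c in counts)
```

A checksum without this uniformity guarantee could, on structured inputs like these, pile keys into a few buckets - which is exactly what makes it a poor fit for hash-table-style uses.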

Cryptographic hashing functions offer these guarantees, among lots of others. CRC32 doesn't, and several of the other checksums that were quickly designed for speed have also been shown not to offer them.

In this situation, a programmer can choose between two paths:
- Some coders will try to design their own new hash which offers both good speed and the important properties (e.g.: that's exactly what LZ4's Yann Collet did when he created xxHash64. It's not a cryptographic hash, but at least it offers all the properties that Yann needed).
- Others will instead jump to a quick'n'dirty solution and go for the major overkill: take a cryptographic hash (e.g.: and that's what Linus Torvalds did. He's a lazy git. He knew that a cryptographic hash would provide all the properties he needed. SHA-1 was popular back then and even had some hardware implementations. So he picked it and didn't think much about it. It offers all the properties Linus needed for git. It also offered much more, but Linus didn't give a fuck about that. It doesn't offer security anymore (especially since the Google proof of concept), but that's something Linus doesn't care about and didn't even bother to check (as mentioned, SHA-1 was already suspect back then, and serious cryptographic usage relied on SHA-2 instead), relying instead on signed repositories when security is needed).

Comment Not an assumption: actual non-India use. (Score 1) 45

And on exactly what basis do you make this assumption?

It's NOT an assumption. It's the personal, anecdotal experience of what is running on my phone.
I'm not from India, I'm from Europe.
I have a Microsoft account, and it's configured to use a time-based OTP as the second factor in two-factor authentication.
I don't have biometrics configured as a way to log-in.
I don't even have my biometrics data stored in the Aadhaar database.

I installed Skype Lite (note: as with Facebook's "Lite" applications, you need to side-load it manually, because inside the Google Play Store the app is geo-restricted and the store will refuse to install it on smartphones outside the target market).
The app asks for my credentials, then asks for my 2nd factor.
That's it, it works.

At no point in time did it ask me to upload my biometrics into the Aadhaar database, nor did it even consider using biometrics as an authentication factor.
So even if Aadhaar is apparently a leaky mess of privacy violations, I'm not concerned by it.
My fingerprints are not going to get pwnd and leaked to the net by Aadhaar as they don't have them.

So okay, I'm a single data point. Maybe there are other factors that I'm overlooking.
But still, my personal anecdotal experience is a good sign that people outside of India could be using Skype without being affected by Aadhaar's "peculiar" relationship with personal information and privacy.
The only limitation that I've seen is the Play Store's own geo-locking, preventing non-Indian Google accounts from deploying the app (this can easily be circumvented by installing manually).
No practical limitation would prevent a US user, like the top poster, from using it in the USA (just like I did) and enjoying a Skype client that only uses a fraction of the resources that the current mainstream Android app uses.

The same is similarly valid for Facebook Lite and Facebook Messenger Lite.

Comment Less associates or more lawsuits ? (Score 1) 361

The big question I was trying to raise in my above post is :
- are young associate lawyers being made redundant by OCR and AI, to the point that they are fired and we see even fewer lawyers nowadays than before?
or:
- are OCR and AI enabling the young associate lawyers to do much more work for the law firm (e.g.: now they can use Google to search online through a large corpus of archives, instead of painstakingly going through microfiche in the basement of some government archive), so that the law firm can process even more lawsuits - to the point that we see more and more lawsuits and other legal cases everywhere than there used to be in the past?

My current impression, from all the information I find online, is that lawsuits and other legal proceedings are actually on the rise.
(e.g.: the several million DMCA take-downs issued by the Brazilian equivalent of the **AA against an obscure "mp3toys" downloading website).
We're not seeing *fewer* lawyers; we are seeing lawyers kept busier thanks to the modern computing tools.

Or in another field: Watson isn't putting medical doctors jobless on the street. Watson is helping process more of the simple, routine cases that could otherwise swamp a doctor's office. It helps doctors process even more patients, more cheaply than before, thus bringing more affordable healthcare to the population.

Comment Also mine vs. others (Score 2) 361

Also, the survey taker will be more concerned about others' jobs (i.e.: jobs in general), because they see the overall advances in AI (e.g.: speech recognition in Siri, automatic image tagging in Facebook, automatic face recognition nearly everywhere) and think that, in general terms, AI is progressing and one day might replace them...

But when they think about their own job (i.e.: a specific area where they have expertise), they have much more insight into the details (they know all the intricacies of their craft;
they might even have seen and/or tested some automation solution) and have noticed that we aren't quite there yet.
(e.g.: though speech recognition has made advances, automatic transcription isn't perfect for anything but the easiest cases. YouTube automatic captions still need to be corrected by a human, etc.)
They might even notice that robots are going to augment rather than replace them - as mentioned by others in this thread (AI is currently helping with the research work in law. It's not replacing attorneys. Instead it's enabling a law firm to do much more without needing to hire more interns and assistants).

Hence the "my job" vs. "others' jobs" fears.

In addition to not being frightened "once a week by a robot", as mentioned,
they might know, given the specifics of their job, that it won't exactly mean an overnight takeover by bots within the coming month.

Comment Apps solutions (Score 1) 89

I switched to the "Lite" versions of Skype/Facebook/Messenger because they were designed for 2G networks in BRICS countries.
(Thus they phone back to the mothership less, and subsequently wake up less often.)

And as for the jailing: webOS-powered Pre phones did attempt the jailing idea a bit.
Given that modern kernels have even better isolation features (containers like LXC and systemd-nspawn), that should be even easier.
(Having each container's network connected to a different type of bridge, some of them disabled when you leave for the weekend and don't want your battery to die.)
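As a sketch of that per-container-bridge idea with systemd-nspawn (the container and bridge names here are hypothetical, just for illustration), a `.nspawn` file can pin a container's network to a dedicated bridge, which you can then take down to cut only that container off:

```ini
; /etc/systemd/nspawn/social.nspawn  (hypothetical container name)
[Network]
; Attach this container's virtual ethernet to its own bridge;
; bringing br-social down severs only this container's connectivity,
; e.g. over the weekend when you don't want it waking the device.
Bridge=br-social
```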

Comment Homophobia and suicide (Score 3, Insightful) 192

Awww, did someone call you a faggot? He's a meanie!

There's solid data showing that the suicide rate is higher among bi- and homosexual youth (teens and young adults) than among their heterosexual peers.
This is believed to be strongly linked to the difficulty of feeling accepted. The more a young individual with an unorthodox sexuality and/or gender identity feels rejected by the surrounding society, the higher the risks of suicide.

Check the summary again: it was not a young internet user shouting homophobic slurs at a senior officer, it was the other way around.
By keeping up a climate where "being [homophobic slur]" is considered a bad thing, that senior officer is actively contributing, in a small part, to the lack of self-acceptance and the higher suicide rates among non-heterosexual young people.

It's not about being ridiculously, excessively nice to people so they feel like special snowflakes.
It's about avoiding a general situation where young persons feel so rejected by society that suicide seems a better alternative.

Comment Aadhaar vs US (Score 1) 45

For those of us who would be happy using less bandwidth stateside,

What part of "also uses India's controversial Aadhaar biometric authentication" did you not understand?

And you, what part of "stateside" didn't you understand?
US citizens (and, in my case, Europeans) aren't very likely to have their biometrics in an Indian government database.
Users can still log in using normal Microsoft credentials (as far as I know) and completely ignore that Microsoft offers Indians the possibility of logging in using biometrics stored in a database that leaks private information all over the place.

Comment Bogus URLs (Score 1) 81

Yup it's basically that.

With the additional peculiarity that here, "" will spit out a valid page with suggestions, no matter what you throw in as a name afterward (even if "" isn't in their database, it will still give a list of not-necessarily-related download links).

So they are not exactly issuing DMCA notices about links that don't even exist (these URLs do not return 404s), only DMCA notices about links that are not in Google's database (random links that elicit a random answer from the website).

The claim is borderline bogus because, as mentioned, the website returns random, unrelated download suggestions, so the website is not necessarily infringing on the DMCA submitter's IP. On the other hand, as the result page is random, Google can't prove that the submitter's IP didn't show up by random chance on the result page on the precise occasion when they tested the URLs about which they decided to file the complaint.

So currently Google is deciding to accept the submissions. But that could easily change in the future.

Comment Attention (Score 1) 132

not all humans are capable of staying focused on the ride while not involved in it

Hence some strategies of asking drivers to keep their hands ready on the wheel (and other similar micro-involvements).

(And there is experience, coming from the world of train automation, suggesting that this works (a bit).
e.g.: TGV train operators are required by the system to periodically hold the thrust control.)

Also, in my personal experience, you still remain involved in the driving:
- even if the adaptive cruise control is taking care of keeping distance from the car in front, you need to periodically adjust the target speed depending on the limits of the local part of the highway. And in a city setting you still need to react to traffic lights, stop signs, yields, etc.
- even if your car has a lane-keeping system, you still need to initiate overtakes (even Tesla Autopilot's lane change isn't good enough to be done without supervision - the car's sonars have a very short range and might miss a car coming fast from far away in the target lane) and, overall, handle highway entries/exits and city crossings.

and what is the point of that anyway?

the same as having a friend in the passenger seat also watching the road :
additional checks.

Machines are never distracted: the LIDAR, cams and radar are always on, their input constantly processed. The car's computer will never lose focus.
Computers excel at boring, repetitive tasks. The car will always be ready to execute emergency braking if there's a risk of collision.

So, compared to just a lone driver steering the car, an autopilot ("Level 2" in official parlance) in addition to the human watching is always better (redundancy against possible accidents), and it's even better if driver AND passenger watch the road in addition to the AI.

Comment Devil's advocate (Score 1) 129

Playing the devil's advocate

There was a time people believed combustion was "phlogiston" exiting the material;

Which isn't entirely wrong. It's just the same usual equations, but with an arbitrary minus sign in front of the oxygen.

(Just as you could mathematically describe orbits with a complex bunch of circles, but using ellipses makes it way simpler for everyone.)

blood was generated and consumed in the body (not circulated);

(Medieval dark-age medicine hardly qualifies as a science; it was more of a superstition.
The Christian Middle Ages somewhat focused on a very small subset of the knowledge available in antiquity (mainly Aristotle) that happened to play nicer with their religious beliefs.)

(A real notion of blood circulation can already be found in many other Greek scientists, and as far back as Egyptian antiquity.
The Middle Ages just settled on Aristotle's body humors for an arbitrary reason.)

the Sun revolved around the Earth;

And then Einstein came and declared that everything is relative and it's only a matter of reference frame.
(You can pretty much put whatever you want at "your center"; all the relativistic equations remain valid.)

All of these ideas were eventually discarded through a process that was not incremental, but revolutionary.

and yet which still built upon several other smaller past discoveries to arrive at the big conclusion:

mice could be "created" by leaving some food and rags alone in a bucket in a barn for a few days, while fly maggots were "generated" in meat.

the disproving of which required both preliminary advances in chemistry (e.g.: Lavoisier - matter can't just pop into existence) and a general understanding of evolution (e.g.: Darwin - mice must come from other mice, or at least from ancestors close to modern mice), and in turn had interesting implications for germ theory (Pasteur - bacteria can't just pop into existence, exactly as mice can't either) and for medicine (Koch and the identification of agents causing diseases).

Around Maxwell`s days it was believed aether was needed for the propagation of electromagnetic waves

And yet Maxwell didn't completely invent electromagnetism out of the blue (again, e.g.: Volta for a much older contributor). Even the word "electricity" comes from the old Greek "elektron" = amber, i.e.: the thing that you need to rub with cloth to generate static electricity.
And in turn his models were perfected by Einstein, and then further by quantum physics (Heisenberg and co).

and the age of the Earth was under estimated because the radio active processes preventing a more rapid cooling down were unknown.

Yet some geologists did come to differing conclusions due to plate tectonics.
And you needed the advances by the Curie couple to then be able to refine the calculation of the Earth's thermal cooling. And isotope dating too.

Yup. Some steps are wider than others, but they still build upon all the knowledge that was accumulated up to that point, starting as far back as when the first monkey-man lifted up his nose and started wondering about the stars in the sky instead of just thinking about where to get the next fruit.

Comment Depends on supervision (Score 1) 132

It's not a terribly difficult problem to get to work 99.5% of the time, but with lives at risk most people aren't too happy with that number.


If the system works even 90% of the time and there's a human backup who is alert and focused, then it's already good.

(Like the autopilots found in airplanes, boats, and some modern high-speed trains:
autopilots help automate some minute details of the driving/sailing/flying,
but autopilots are still under the supervision of a human in charge.
They just relieve the human of part of the stupid hard gruntwork.

That's also where Tesla's Autopilot and Google's highway prototypes fall in.)

If the system works even 99.9% of the time, and the human is asleep, that's an entirely different can of worms.
You need well-established public awareness that the autonomous driver is better and causes far fewer accidents than humans.

(The small-scale, slow-driving Google cars with no steering wheel fall into this category.)
