


Ask Slashdot: Dealing With Electronics-Induced Inattentiveness?

OdinOdin_ Re:Change your state of mind (312 comments)

I'm not a practitioner or a teacher; I've only been to a couple of sessions, and I've heard about "fighting in slow motion".

about 1 month ago

UK Announces Hybrid Work/Study Undergraduate Program To Fill Digital Gap

OdinOdin_ Re:Could be a good idea.. (110 comments)

My favourite data structure is: DNA
How would I implement it? Huh!

about 2 months ago

Ask Slashdot: Why Can't Google Block Spam In Gmail?

OdinOdin_ Re:Former Google Engineer - my internal perspectiv (265 comments)

You don't have to present a certificate to the server?

You can initiate SSL/TLS whereby the only party presenting a certificate is the server, to the client.

Do you think all HTTPS clients present a certificate to the HTTPS server? That is not how HTTPS usually works; only the rare systems that use client-side SSL certificates for authentication do that. Your standard credit card transaction or login portal does not present any certificate to the server.
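A quick illustration with Python's ssl module (the hostname is a placeholder and the connection is illustrative only): a default client context verifies the server's certificate but never loads a client certificate of its own.

```python
import socket
import ssl

def fetch_server_cert(host: str, port: int = 443):
    # Standard HTTPS-style handshake: the server must present a
    # certificate (which we verify), but this client presents none.
    ctx = ssl.create_default_context()  # note: no load_cert_chain() call
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Only a system doing mutual TLS would call `ctx.load_cert_chain(...)` on the client side before handshaking.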

With STARTTLS you start unencrypted, enable TLS via the STARTTLS command, then perform some kind of authentication inside the secure TLS channel (this can even be plaintext authentication inside TLS). Only then do you proceed to use SMTP, having both set up a secure channel and authenticated.
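That sequence maps directly onto Python's smtplib (host, port and credentials below are placeholders, not a real server):

```python
import smtplib
import ssl

def submit_message(host, user, password, sender, recipient, body):
    # 1. Connect in plaintext on the submission port.
    with smtplib.SMTP(host, 587) as smtp:
        smtp.ehlo()
        # 2. Upgrade the same connection to TLS; only the server
        #    presents a certificate here.
        smtp.starttls(context=ssl.create_default_context())
        smtp.ehlo()
        # 3. Authenticate inside the encrypted channel (even a
        #    plaintext AUTH mechanism is safe at this point).
        smtp.login(user, password)
        # 4. Use SMTP as normal over the secured channel.
        smtp.sendmail(sender, recipient, body)
```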

about 3 months ago

Ask Slashdot: Why Can't Google Block Spam In Gmail?

OdinOdin_ Re:That's the WRONG way to do it (265 comments)

Yes you are correct.

The problem is simple to fix: make it cost them CPU time.
Implement an SMTP client <> SMTP server cookie system, whereby an ad-hoc association can be established between two systems and the client can present an arbitrary token to help build trust and reputation around it (or simply use the IP address or an SSL certificate hash).
Next, define a mathematical problem that is cheap (in CPU cost) to set up and verify, but hard for the SMTP client to compute, forcing it to brute-force the problem (thus making the client pay the greater CPU cost). The difficulty needs to scale both linearly and exponentially.
Allow the server to define the problem and the scale of the challenge, so more trusted clients get a cheap problem while brand new clients get hit with a harder one.
Build it all into the SMTP protocol.
Now the server is in complete control of the cost a particular client must pay to send the message, and the client can decide to accept the cost or bounce the message.

Now sending from an ADSL link, from a foreign country, or from a well-known virtual host provider can all be scaled accordingly, up to the point where spam becomes too expensive to rent enough server capacity for.
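A minimal sketch of such a challenge, modelled on Hashcash (function names are mine; a real SMTP extension would negotiate this inside the protocol): verification is a single hash, while the client's brute-force cost doubles with every extra difficulty bit.

```python
import hashlib
import os

def make_challenge() -> str:
    # Server side: cheap to create.
    return os.urandom(16).hex()

def solve(challenge: str, difficulty_bits: int) -> int:
    # Client side: brute-force a counter until the SHA-256 of
    # "challenge:counter" has `difficulty_bits` leading zero bits.
    # Expected cost: about 2**difficulty_bits hashes.
    target = 1 << (256 - difficulty_bits)
    counter = 0
    while True:
        d = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if int.from_bytes(d, "big") < target:
            return counter
        counter += 1

def verify(challenge: str, difficulty_bits: int, counter: int) -> bool:
    # Server side: one hash, regardless of how hard the problem was.
    d = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return int.from_bytes(d, "big") < (1 << (256 - difficulty_bits))
```

An untrusted client might be handed 22 bits (millions of hashes per message) while a reputable one gets 10; the server dials that knob per client.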

about 3 months ago

Ebola Vaccine Trials Forcing Tough Choices

OdinOdin_ Re:funny that.... (178 comments)

Yes, there are always vaccines for everything "in development"; this is called research.

Making a news story at just the moment someone in the US was confirmed as having the strain is more a marketing ploy to ensure the company with the goods gets attention and its phone ringing. Better to have your phone ringing with offers of government money than your competitor's phone ringing because some government official happens to know someone in that industry.

Is the vaccine production-ready for the general population? Hell no!

Did I tell you I have a perpetual energy machine that is "in development"?

about 4 months ago

US Says It Can Hack Foreign Servers Without Warrants

OdinOdin_ Re:Color Me Surprised (335 comments)

> It takes year, my friend.

Did you see "Attack of the Clones"? At some point in the future these MakerBot replicator kits will be capable of building domestic drones carrying payloads; it's just a matter of time.

No need to persuade many people of your tyrannical viewpoint over many generations to build that army.

about 4 months ago

Belkin Router Owners Suffering Massive Outages

OdinOdin_ Re:In retrospect (191 comments)

Ah yes, De Morgan's law.

about 4 months ago

High Frequency Trading and Finance's Race To Irrelevance

OdinOdin_ Re:Technological solution (382 comments)

Heh, because the stock they sell immediately is one of the other 999,999 stocks they hold in the same entity. Those stocks had their one-minute minimum holding period expire a long time ago.

This is also why it is funny when people say that pension funds hold their stock for a long term view.

But what can happen is that two pension funds collude to exchange assets with each other (a zero-sum game) over a period of time, so the fee levied on every transfer can be taken by all the snouts in the transaction-cost trough. Yes, if you stand back and look week to week, they appear to be holding their positions for the long game, but actually they have found a way to extract additional profits to pay the pension pot "fund managers" their bonuses.

about 8 months ago

PHK: HTTP 2.0 Should Be Scrapped

OdinOdin_ Re:Encryption (220 comments)

Because the point of the compression is to compress the content body payload transparently (and potentially the HTTP header names and keywords) at the TLS streaming level.

It only makes compression useless for the "Cookie" header, which is exactly what is needed to defeat CRIME.
All security-sensitive data like this should be trivially fuzzable. Maybe a better scheme would be to implement:

Fuzz-XOR-Key: 0123456789abcdefxyz/+===
Fuzz-Cookie: $version=1; $foobar="123"; $random-nonce-1="192jsk232"; SESSIONID="0123456789secretXOREDresultHER"; $random-nonce-99="982kmn323"; $fuzzed="SESSIONID";

NOTE: Its been a while since I looked at a Cookie header directly, there are probably some major syntax mistakes in the above example.

Now you can extend this to any other kind of header using a common key and transformation technique, by prefixing headers with Fuzz-* and writing an RFC/IETF document on how the key is applied, to which parts of the header value data, and when.
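A toy version of that masking in Python (the header names and base64 encoding are my invention, matching the sketch above, not any real spec): the key changes on every response, so the bytes the compressor sees never repeat, yet the receiver recovers the cookie exactly.

```python
import base64
import os

def xor_mask(value: bytes, key: bytes) -> bytes:
    # XOR with a repeating key; applying it twice restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(value))

def emit_headers(session_id: bytes) -> dict:
    key = os.urandom(16)  # fresh key for every response
    return {
        "Fuzz-XOR-Key": base64.b64encode(key).decode(),
        "Fuzz-Cookie": base64.b64encode(xor_mask(session_id, key)).decode(),
    }

def recover_cookie(headers: dict) -> bytes:
    # The receiving end reverses the mask using the advertised key.
    key = base64.b64decode(headers["Fuzz-XOR-Key"])
    return xor_mask(base64.b64decode(headers["Fuzz-Cookie"]), key)
```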

Your suggestion of disabling compression in SSL/TLS support is already implemented.

about 8 months ago

Hundreds of Cities Wired With Fiber, But Telecom Lobbying Keeps It Unusable

OdinOdin_ Re:Annoying. (347 comments)

Very similar to how it works in the UK.

A business called "BT Wholesale" (aka Openreach) operates as a corporate entity in its own right, which the government regulates. They more or less have a last-mile monopoly over the old British Telecom network (British Telecom used to be the incumbent single telephone operator and was originally a public entity). It was privatised maybe 20 years ago, but with certain caveats.

Such as a uniform pricing policy toward all other telecom operators wishing to buy their wholesale services. Think FRAND, as opposed to scheming and back-office deals to maintain pricing.

Such as not offering the full package, i.e. only offering wholesale services. A regular home or business consumer never buys anything directly from the wholesale division. The end customer buys from one of the many brand names (more than 500 on our little island), who in turn pay the wholesale rental fees out of your subscription.

Such as allowing politicians to have influence (through regulation) over certain aspects of governance. This is a good thing when there is a last-mile monopoly: there is at least some kind of elected accountability, especially when the government paid for the original construction of the network.

There is of course a parallel cable network now, which also has its own independent last mile. So in almost all urban/suburban locations another option exists, but BT's copper POTS network has much higher coverage.

There are also some areas (such as Kingston upon Hull) which ended up with their own last-mile services and operate their own telecoms independently.

Here in the UK now (with BT Wholesale) the whole country is getting more street-side cabinets (within about 100 metres of every urban and suburban location), with fibre optics installed from those cabinets back to the local exchange. The last 100 metres is still largely delivered over copper, but at speeds around 80Mbit/20Mbit, and I'm sure further speed increases will follow, as they did across ADSL/ADSL2/ADSL2+. This national roll-out is over halfway through, and I'm sure the original plan will be complete within the next 3 years.

There are still issues with many rural locations being stuck on dial-up quality; hopefully, as cellular-like technology improves, it could be utilized as backhaul for rural locations. Rural in the UK might mean just being 8 miles out of town.

about 8 months ago

GnuTLS Flaw Leaves Many Linux Users Open To Attacks

OdinOdin_ Re:Who uses GnuTLS? (127 comments)

But no major server software that another party can connect to remotely to exploit.

about 8 months ago

PHK: HTTP 2.0 Should Be Scrapped

OdinOdin_ Re:Encryption (220 comments)

What is the problem with the CRIME attack and header compression?

Just add an XOR string to the Cookie header that is applied against the other data fields. The XOR key itself can change each time a Cookie header is emitted. Now you have a non-repeating, pseudo-random input for the compressor to work on, but the other party can apply the transform to the Cookie header to get back the original data.

For good measure, also add an additional Random-Nonce-Field-1="random-length-data" which is simply ignored and discarded by the other end. Now you can perturb the compressor in both directions: a field that is completely useless to the attacker but identical each time (allowing it to compress), and also a Random-Nonce-Field-2 which changes for each header, like the XOR key, again completely useless data to be ignored.
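A quick zlib experiment illustrates both halves of the idea (toy header layout and field names are mine): without a nonce, a correct guess measurably shrinks the compressed output, which is exactly the CRIME signal; a random-length nonce buries that signal in noise.

```python
import os
import zlib

SECRET = b"SESSIONSECRET123"

def probe_plain(guess: bytes) -> int:
    # Attacker's measurement: compressed size of a header that mixes
    # the secret with an attacker-chosen guess. A matching guess
    # becomes a back-reference and compresses smaller.
    return len(zlib.compress(
        b"Cookie: SESSIONID=" + SECRET + b"; guess=" + guess))

def probe_with_nonce(guess: bytes) -> int:
    # Same measurement, but with a random-length ignored field mixed
    # in, so the size no longer cleanly reflects the guess.
    filler = os.urandom(1 + os.urandom(1)[0] % 16).hex().encode()
    return len(zlib.compress(
        b"Cookie: SESSIONID=" + SECRET +
        b"; Random-Nonce-Field-1=" + filler +
        b"; guess=" + guess))
```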

Now it is up to the researchers to use these tools (added via a Cookie spec change) to find the most CPU-efficient way to make CRIME and other such attacks non-viable.

Or maybe I am missing something glaring here?

about 7 months ago

Official MPG Figures Unrealistic, Says UK Auto Magazine

OdinOdin_ Re:Which is why sometimes small engines ... (238 comments)

Heh, except for the matter that if they don't comply with whatever regulations come out, they don't get approval.

While retrospective regulation of old cars is a political matter (a pissed-off public forced to comply), regulation affecting brand new cars is not. That is a matter for the auto industry to solve, and the public don't care.

Retrospective regulation changes (that affect a significant number of the population) are rare events.

Plus there is the small matter of driving on the correct side of the road (which we do). So that does influence which side of the car the steering wheel is on.

about 7 months ago

McAfee Grabbed Data Without Paying, Says Open Source Vulnerability Database

OdinOdin_ Re:I considered doing the same myself (139 comments)


Neither suggests access was explicitly or implicitly DENIED to third parties.
All someone else was doing was taking a photo of you.

Oh, you have a Terms & Conditions document in your back pocket, do you?

robots.txt is great and all, but someone did actually sit there pressing a button for each website hit; the button generated a random number, and this number was used to randomize the delay and the User-Agent data. It was under 2500 hits, after all; sheesh, I can hit eBay that many times just by browsing their catalogue for an hour.

about 9 months ago

Sony Tape Storage Breakthrough Could Bring Us 185 TB Cartridges

OdinOdin_ Re: But is it even usable? (208 comments)

Hmm, but you should have your RAID system performing:

* verification (reading every copy of the data and checking that the CRC validates correctly, as expected)

* scrubbing (writing other random patterns to each block of the disk, to confirm the disk is in good order and will take new data; it also re-energises the disk. The original data is then written back into the block and verified before moving on to another part of the disk. This operation often requires battery-backed memory, so the original data is preserved robustly across an unwanted power outage.)

Ideally you should verify the whole storage at least once per week, and scrub the whole storage once per month. With hardware cards these operations can be performed slowly in the background, though often a few hours a day during off-peak will do the job.

Doing this alone can extend the life of disks, compared to writing a block of data, not accessing it for 5 years, then wondering why in 5 years' time the block is now corrupted.

Both of these operations provide a better health check of RAID than SMART alone, since SMART only knows of a problem after it has seen one, and that often requires you to access the problem area of the disk. This is what verification/scrubbing does on your behalf, continuously, over the week.
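As a toy model of the verification pass (nothing like a real RAID driver; block size and layout are illustrative only): read every block from every mirror, recompute the CRC, and flag any block whose copies disagree with each other or with the stored checksum.

```python
import zlib

def verify_mirrors(mirrors, checksums, block_size=4096):
    # Toy verify pass over an n-way mirror: read each block from
    # every copy, recompute CRC32, and report any block whose copies
    # disagree with each other or with the stored checksum.
    bad_blocks = []
    for i in range(len(checksums)):
        copies = [m[i * block_size:(i + 1) * block_size] for m in mirrors]
        crcs = {zlib.crc32(c) for c in copies}
        if len(crcs) != 1 or crcs.pop() != checksums[i]:
            bad_blocks.append(i)
    return bad_blocks
```

A scheduled scan like this surfaces silent corruption long before a rebuild forces you to read every sector at the worst possible moment.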

about 9 months ago

Intel and SGI Test Full-Immersion Cooling For Servers

OdinOdin_ Re:I wonder... (102 comments)

Yes, boiling at this temperature is useful. It makes it easier to separate the hot from the cold: the equipment can be immersed in the liquid phase, heat it up, and the system automatically separates out the part that needs cooling and re-condensing.

Transporting the hot part becomes easy; the system has a natural pump cycling the molecules around, driven by the heat itself. So all that heat energy is usefully absorbed by the system (into kinetic energy), and you are not putting additional energy in (such as a liquid pump), which would itself require cooling.

about 9 months ago

Heartbleed Coder: Bug In OpenSSL Was an Honest Mistake

OdinOdin_ Re:code review idea (447 comments)

Would love to help maintain it, but committers are too busy.

First, they need to switch to git as the main tree (maybe they already did this since I last looked properly).
Second, they need to set up Gerrit code review and allow anyone and everyone to submit and review patches.
Third, they need to set up some kind of unit testing and code coverage framework. I once wrote a testing tool, sslregress, to validate a change I made fixing a long-standing API oversight; it provides a framework for stress-testing the network interaction between two SSL endpoints in ways you cannot ordinarily test, and it could easily be extended to send garbage data inside valid SSL/TLS records. But can someone explain how such a thing actually makes it into the code base? Who do I need to f**k?

From my point of view the OpenSSL maintainers are in their ivory tower, and that is the way they like it. Maybe it helps keep their revenue streams up? Since those committers are also part of the official support teams.

about 9 months ago

Heartbleed Coder: Bug In OpenSSL Was an Honest Mistake

OdinOdin_ Re:Not malicious but not honest? (447 comments)

It might contain a length because of cipher block padding? Is the SSL/TLS record length guaranteed to have 1-byte granularity for all supported block cipher modes and methods?

It might contain the length to allow the protocol to be extended at a later date, by putting additional data after the heartbeat echo payload. Because version 1 of the feature included the length, data placed afterwards can be specified in version 2 of the feature while version 1 systems still interact as if it were version 1.

My question is... why is the correct action to silently discard the record? Surely a malformed heartbeat record should result in a TLS protocol error, with no further inbound or outbound data processed (except to flush the error alert record to the other end) and a closed connection?

about 9 months ago

Heartbleed Coder: Bug In OpenSSL Was an Honest Mistake

OdinOdin_ Re:Not malicious but not honest? (447 comments)

Huh, no... the developer who put in the TLS heartbeat support tested it by sending valid, well-formed data.

To expose this bug you have to send a validly authenticated SSL3 record, but intentionally modify the length attribute inside it that conveys the length of the heartbeat payload data, so that it overruns the data actually remaining inside the SSL3 record. The failure was not performing that bounds check: is the claimed heartbeat payload length longer than the remainder of the data available inside the SSL3 record? I presume a TLS heartbeat is not allowed to cross SSL3 records, and the plaintext limit of a TLS record is 16KB as mandated by the protocol.

So no, OpenSSL doesn't crash when tested against itself, because the data was always well formed and valid.
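The missing check can be shown with a toy heartbeat parser (a simplified record layout, not actual OpenSSL code): a 1-byte type, a 2-byte claimed payload length, then the payload, where the claimed length must be bounded by what was actually received.

```python
def parse_heartbeat(record: bytes):
    # Toy TLS heartbeat: type (1 byte) + claimed payload length
    # (2 bytes, big-endian) + payload (+ optional padding).
    if len(record) < 3:
        raise ValueError("record too short")
    msg_type = record[0]
    claimed = int.from_bytes(record[1:3], "big")
    rest = record[3:]
    # The bounds check Heartbleed lacked: without it, the responder
    # echoes `claimed` bytes and reads past the record's real end
    # into adjacent memory.
    if claimed > len(rest):
        raise ValueError("claimed payload length exceeds record")
    return msg_type, rest[:claimed]
```

Testing a peer only against itself never exercises the rejection branch, because a conforming implementation always writes a truthful length field.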

about 9 months ago

NSA Allegedly Exploited Heartbleed

OdinOdin_ Re:Does the "fix" include scrubbing? (149 comments)

Ignoring the performance hit of this (which many applications won't accept):

Kernel pages are often allocated in 4KB chunks, so at best you have scrubbed up to the end of the current 4KB page (the part managed by OpenSSL's custom allocator).

But the system (libc) allocator is not overwriting released blocks, and the next 60+ KB of an over-read would land in memory managed by it, which does not overwrite all released blocks.

about 9 months ago


OdinOdin_ hasn't submitted any stories.


OdinOdin_ has no journal entries.
