
Comments


Why Screen Lockers On X11 Cannot Be Secure

Todd Knarr Re:not the point (354 comments)

You download a program that appears legit (and may be mostly legit, or be a hacked version of a legit program), and are running it.

But why would I do that? Almost all the programs I use come from the repository, and to get me to download one they'd have to compromise the repository first (which is possible, but not nearly as easy as just advertising a program for download). The rest are again ones I download from known sources, usually the developers' own official site, and again it's not trivial to compromise those sites.

The situation you propose only happens in the world of Windows where downloading random software from untrusted/unknown sources is routine. And if you're routinely doing that, you've got more problems than just a way to bypass the screen lock. The best way to avoid shooting yourself in the foot is to not blithely follow instructions but to stop and ask "Wait a minute, why are they asking me to aim a loaded gun at my foot and pull the trigger?". And if after pondering that question you still think following the instructions is a good idea, please report to HR for reassignment as reactor shielding.

yesterday

Pope Francis: There Are Limits To Freedom of Expression

Todd Knarr Re:He's Not Justifying Retribution (894 comments)

Sure, if someone curses his mother, they shouldn't be surprised if he slugs them. However, note that if the police get involved it would be the Pope going to jail and being charged with battery, not the person who cursed his mother. You may be expected to have enough self-control not to curse like that, but you're also expected to have enough self-control not to respond to ordinary words with physical violence.

about two weeks ago

Education Debate: Which Is More Important - Grit, Or Intelligence?

Todd Knarr Both are correct (249 comments)

The way "intelligence" is used here falls more under the heading of what I'd call "the skills you have". Some are innate physical abilities; many are probably learned, but we don't really know when or how, so they end up just being things that naturally come easy to you. They're the hand you're dealt. Grit and persistence are useful in making the most of the skills you have, practicing and refining them to get the most out of that hand. Both are needed.

We all know people who just don't get math, or have bad hand-eye coordination, or other things they're just bad at that pretty much preclude them from being theoretical physicists or world-class tennis players, no matter how much they might work at it. All the grit in the world won't help much if you're focusing on something you're just bad at. We also all know people who're very good at something and have the potential to be very successful in some field, except that they won't put in any effort they don't absolutely have to, and so they never become successful. All the potential skill in the world won't magically make you good if you don't apply yourself.

The key, of course, is to apply grit and persistence to the things you're good at and the things you absolutely need, rather than to things you're bad at.

about two weeks ago

HTTP/2 - the IETF Is Phoning It In

Todd Knarr Re:HTTP/1.1 is just fine (161 comments)

It's not just a matter of decoding the packets. The big problem is usually in separating out the packets for the connections from one specific client while ignoring the packets for all the other clients, and then assembling those packets into a coherent order so you can see individual requests and responses rather than just packets. That's fairly easy to do at each endpoint, much harder to do when just sniffing traffic in the middle. And of course the code to decode packets and assemble them into a transaction is more complex than the code that just appends to a string and writes that string to a file. Not everything has that kind of logging already built into it, and when I need to add it I'm usually pressed for time because it's a critical problem. tcpdump or wireshark will work, given enough effort, but I've too often seen them produce valid-looking but deceptive results: the filtering, selection, and reporting were correct enough to look reasonable but not quite completely correct, so the output showed me something that didn't exactly match reality. Debugging dumps finally revealed the discrepancy, and we got the problems solved.
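The reassembly problem described above can be sketched in miniature. Everything here is hypothetical for illustration (the field names, addresses, and the `reassemble_stream` helper); real tools operate on live TCP captures rather than dictionaries, and must also handle retransmits and multiple connections per client.

```python
# Toy model of the hard part of mid-network sniffing: keep only one client's
# packets, then restore them to sequence order so the payload reads as a
# coherent request instead of interleaved fragments.

def reassemble_stream(packets, client_ip):
    """packets: list of dicts with 'src', 'seq', 'payload' (hypothetical fields)."""
    ours = [p for p in packets if p["src"] == client_ip]   # drop other clients
    ours.sort(key=lambda p: p["seq"])                      # restore in-order delivery
    return b"".join(p["payload"] for p in ours)

packets = [
    {"src": "10.0.0.7", "seq": 17, "payload": b"Host: example.com\r\n\r\n"},
    {"src": "10.0.0.9", "seq": 1,  "payload": b"GET /other HTTP/1.1\r\n"},
    {"src": "10.0.0.7", "seq": 1,  "payload": b"GET / HTTP/1.1\r\n"},
]
print(reassemble_stream(packets, "10.0.0.7").decode())
```

Even in this toy form, the filter-and-sort step is where the "correct enough to look reasonable" failures creep in: a slightly wrong filter still produces plausible-looking output.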

about three weeks ago

HTTP/2 - the IETF Is Phoning It In

Todd Knarr Re:HTTP/1.1 is just fine (161 comments)

Most of the bandwidth for modern web sites goes to content, not the HTTP headers, and that's even with content compression, which is already part of HTTP/1.1. Reducing overhead by going to binary in the headers isn't going to reduce the bandwidth requirements by enough to notice, and it comes at the cost of not being able to use very simple tools for diagnosis and debugging. I've lost count of the number of times I was able to use telnet or openssl and copy-and-paste to show exactly what the problem with a server response was, and to demonstrate conclusively that we hadn't malformed the request or botched parsing the response. If I'd had to use tools to encode and decode things, the vendor would've questioned whether our tools were working right, and then I'd've had to prove the tools weren't misbehaving; telnet and openssl were widely enough used that that was never a problem.
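The telnet-style workflow described above can be illustrated with a raw socket: the request is literally the text you'd type by hand, and the reply comes back as readable text. This is a self-contained sketch, not any particular tool; the local test server exists only so the example runs on its own.

```python
# Speak plain-text HTTP over a raw socket -- the "telnet and copy-and-paste"
# style of debugging. The throwaway local server makes the sketch runnable.
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with socket.create_connection(("127.0.0.1", port)) as s:
    # The request is human-readable text, exactly as you'd type it into telnet.
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.split(b"\r\n")[0].decode())   # the status line, also plain text
server.shutdown()
```

With a binary framing layer in between, neither the request nor the reply would be legible without an encoding and decoding tool in the loop, which is exactly the objection in the comment.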

about three weeks ago

HTTP/2 - the IETF Is Phoning It In

Todd Knarr Re:HTTP/1.1 is just fine (161 comments)

Because none of that requires a new protocol? You can do that in HTTP/1.0, it's entirely a matter of client programming. And yes a protocol analyzer can decode a binary protocol for you, but it takes a bit of work to set them up to display one and only one request stream. A text-based protocol, meanwhile, can be dumped trivially at either end just by dumping the raw data to the console or a log file. Decoding and formatting a binary protocol takes quite a bit more code and adds work. As for bandwidth, the HTTP headers are a trivial amount of data compared to the content on modern web sites so gains from compressing the protocol headers are going to be minimal (content compression already exists in HTTP/1.1 and there's going to be little or no improvement there in the new protocol).

about three weeks ago

UK Government Department Still Runs VME Operating System Installed In 1974

Todd Knarr Re:It is called good coding. (189 comments)

They have. But they didn't do it overnight: they did it in small bits at a time, and those 40-year-old systems were patched or updated and debugged with each change. The result is a twisted nightmare of code that works, but nobody really understands why and how anymore. And the documentation on the requirements changes is woefully incomplete, because much of it's been lost over the years (or was never created, because it was an emergency change at the last minute and everybody knew what the change was supposed to be, and afterwards there were too many new projects to allow going back and documenting things properly) or inaccurate because of changes during implementation that weren't reflected in updated documentation.

As long as you just have to make minor changes to the system, you can keep maintaining the old code without too much trouble. Your programmers hate it, but they can make things work. Recreating the functionality, OTOH, is an almost impossible task due to the nigh-impossibility of writing a complete set of requirements and specifications. Usually the final fatal blow is that management doesn't grasp just how big the problem really is; they mistakenly believe all this stuff is documented clearly somewhere and it's just a matter of implementing it.

about three weeks ago

Is Kitkat Killing Lollipop Uptake?

Todd Knarr Not a good comparison (437 comments)

I don't think the comparison holds up well, because in the case of XP users had control of the upgrade while in the case of phones it's usually the handset maker and to a lesser extent the carrier in charge. Adoption of Lollipop is mainly a function of how many handset models ship with it installed and how quickly people are upgrading to newer models of phones. Most of the flagship models are shipping with some flavor of 4.2 or 4.4 on them, and enough people seem to have bought those models in the last year that it'll probably be summer at the earliest before we see the next cycle of upgrades start in earnest. The only way we'll see Lollipop uptake pick up faster than that is if Google manages to convince the handset makers to roll 5.0 out to phones like the Galaxy S4. It'd also help if carriers stopped insisting on different "models" where the difference is strictly in branding and the actual phone hardware is identical.

about three weeks ago

Netflix Cracks Down On VPN and Proxy "Pirates"

Todd Knarr Re:DNS blocking failure (437 comments)

Harder and "tech savvy"? Hardly. If you're running a router based on DD-WRT (which is basically any home WiFi router these days), it already includes PPTP and OpenVPN servers. It doesn't take much on Windows to create a little script that'll do a one-click push of the necessary files to configure and enable the server and set up the firewall to allow VPN traffic to go to the WAN side as well as the LAN. Worst case is you go to your local geek and have them flash stock DD-WRT onto the router to replace the factory-modified installation (which I'd recommend anyway; the stock images are more stable and less prone to wonkiness).

about three weeks ago

Netflix Cracks Down On VPN and Proxy "Pirates"

Todd Knarr DNS blocking failure (437 comments)

Apparently the media companies haven't heard of this new-fangled device called a "router". It comes with this exotic, difficult-to-use feature called a "firewall". And it ensures that regardless of what DNS servers the application may try to use, it will use my DNS server while on my network. Problem solved.
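The firewall trick being described is a destination-NAT rewrite on port 53. Here is a toy Python model of that logic; the addresses, field names, and the `apply_dnat` helper are all invented for illustration, and a real router does this in its packet filter (e.g. an iptables DNAT rule), not in application code.

```python
# Toy model of the router rule: whatever resolver a device tries to reach,
# port-53 traffic gets rewritten to the router's own DNS server.

ROUTER_DNS = "192.168.1.1"   # assumed LAN address of the router's resolver

def apply_dnat(packet):
    """packet: dict with 'dst' and 'dport' (hypothetical field names)."""
    if packet["dport"] == 53 and packet["dst"] != ROUTER_DNS:
        packet = dict(packet, dst=ROUTER_DNS)   # force the query to our server
    return packet

pkt = {"dst": "8.8.8.8", "dport": 53}   # app tried a hard-coded outside resolver
print(apply_dnat(pkt)["dst"])           # rewritten to the router's resolver
```

The application never learns its query was redirected, which is why hard-coding a DNS server in the client doesn't defeat this.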

As for VPNs, it's difficult to block router-based VPN tunnels since there's no trace on the device that a VPN's in use. All it takes is a suitable server to connect to, and I've got a selection available that aren't part of any VPN service since I set them up myself. Setting it up the first time's a bit tricky, but duplicating that first setup and changing a few address numbers to match the new system's pretty simple.

The media companies need to just grow up and accept that the world's moved on with or without them, and that their problems stem not from any overwhelming desire of consumers to pirate content but from their own adamant refusal to accept consumers' money for that content.

about a month ago

Apple Faces Class Action Lawsuit For Shrinking Storage Space In iOS 8

Todd Knarr Why is this an issue? (325 comments)

It's already assumed on desktops and laptops: saying it has a 500GB hard drive means it has a 500GB hard drive, not 500GB of free space after Windows and all the other software are installed. Saying it has 8GB of RAM means 8GB of RAM, not 8GB of memory free after device drivers and services and Windows and run-on-startup programs have loaded. So why on a phone or tablet should 16GB of storage not mean 16GB of storage? Why is it supposed to mean 16GB free after the operating system and software are installed? It may be simply that phones and tablets have so much less storage compared to desktops, so people are more sensitive to how much is used by the pre-loaded software. The solution to that, though, is simply to buy either a model with enough storage or one with an SD card slot so you can add storage.

about a month ago

Would Twitter Make President Obama 'Follow' the Tea Party If the Price Is Right?

Todd Knarr Where's my cut (121 comments)

If Twitter is being paid to promote a brand through my appearing to follow it, they're having me act as a spokesman aren't they? If so, where's my fee? I think 10% of the gross that Twitter receives is fair.

about a month ago

Ask Slashdot: What Should We Do About the DDoS Problem?

Todd Knarr Solutions exist (312 comments)

  1. Ingress/egress filtering near the edges. Backbone providers obviously can't feasibly do this, but edge networks like consumer ISPs have a solid knowledge of what netblocks are downstream of each subscriber port and what netblocks should be originating traffic on their networks. Traffic coming up from each subscriber should be blocked if it doesn't have a source address in a block owned by that subscriber, outgoing traffic through the upstream ports should be blocked if it doesn't have a source address of a netblock that belongs on or downstream of the network, and incoming traffic through the upstream ports should be blocked if it doesn't have a destination address that belongs on or downstream of the network.
  2. Disconnection of infected systems. If a subscriber system is confirmed to be originating malicious traffic due to a malware infection, shut off the subscriber's connection until they contact the ISP and clean up the infection. Time and time again it's demonstrated that the people getting repeatedly infected won't do anything as long as their connection appears to still work, and that the only thing that gets their attention is connectivity going out. Get their attention and make it clear to them that letting this continue is just not acceptable.
  3. Extend this as far into the Internet as is feasible. Even if you have so much interchange traffic that you can't filter all ports, you may also have some ports where there's a manageable number of known netblocks handled through them and you can do filtering on those ports to reinforce the filtering that should be happening on the connected network.

about a month ago

War Tech the US, Russia, China and India All Want: Hypersonic Weapons

Todd Knarr Simple: the consequences if they don't (290 comments)

Yes, it can lead to an arms race. The problem is that if you hold off and your enemy doesn't, you're a sitting duck. Avoiding the arms race is only possible if everybody involved holds off, and you don't/can't trust any of them to hold off so you have to proceed as if you're already involved in an arms race whether you want to be or not. Because the only thing worse than being in a Mexican standoff is being the one guy in a Mexican standoff without any guns.

about a month ago

What's the Future of Corporate IT and ITSM? (Video)

Todd Knarr It mostly won't change anything (50 comments)

With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT?

It won't. Corporate IT and how it operates is driven by the people who sign the checks. That, BTW, is not the employees. The people who do have considerations other than employee expectations in mind when they decide on policies, and some of those things like compliance with laws and regulations aren't optional. Corporate IT will, as always, continue to be bound by what upper management decides on and the rest of the company will have to live with upper management's decisions. And no, IT isn't any happier about this than the rest of the company, because frankly their job would be a lot easier if upper management would stop telling them how to do things and just let them do whatever they needed to do to deliver what upper management needed. I don't see that happening any time soon.

about a month ago

Docker Image Insecurity

Todd Knarr Re:Read the update (73 comments)

Upstream verification won't help. The client has to verify that the image it received is the same one the server verified, otherwise someone can hack a router to silently redirect the client to a malicious server and serve up whatever image they want alongside a copy of the signed manifest for the official image and you're fsckd. What they need is:

  1. The manifest has to be signed.
  2. The manifest has to contain a secure checksum (cryptographic hash) of the official image the server has.
  3. The client has to verify the signature of the manifest to confirm that the manifest hasn't been altered and comes from the official source.
  4. The client has to verify that the checksum of the image it received matches the checksum for the image in the manifest.

Step 4 is apparently what's missing from the client.
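Steps 2 and 4 above amount to hashing and comparing. Here is a minimal sketch; the function name and the sample bytes are invented for illustration, and signature verification (steps 1 and 3) is deliberately omitted since it's a separate mechanism.

```python
# Sketch of image verification: the signed manifest carries a SHA-256 digest
# of the official image, and the client rejects any download that doesn't
# hash to that digest -- the step the comment says the client is skipping.
import hashlib

def image_matches_manifest(image_bytes, manifest_digest):
    """Step 4: compare the received image's hash to the manifest's digest."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest_digest

official = b"official image contents"
manifest_digest = hashlib.sha256(official).hexdigest()   # step 2, done server-side

print(image_matches_manifest(official, manifest_digest))          # accepted
print(image_matches_manifest(b"tampered image", manifest_digest)) # rejected
```

Without this client-side check, the redirection attack in the comment works: the attacker serves the genuine signed manifest alongside a substituted image, and nothing ever compares the two.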

about a month ago

North Korea Denies Responsibility for Sony Attack, Warns Against Retaliation

Todd Knarr I doubt it was North Korea (236 comments)

For one thing, if North Korea was capable of this sort of hack they've got more tempting targets to use that capability on. And it's just a bit too convenient, coming on the heels of a disappointing performance by Sony, for SPE to suddenly get an excuse to get out from under another apparent flop. My bet is the hack's just another in a long string of breaches by the usual gangs of malcontents, aided and abetted by corporate obliviousness to security, and various parties are just taking advantage of superficial connections for their own reasons.

about a month ago

Staples: Breach May Have Affected 1.16 Million Customers' Cards

Todd Knarr Re:Network Level (97 comments)

There should be more isolation, yep. When I handled POS the terminals had no local storage at all, they were network booted from images on the site server and the LAN they were on had no outside access at all. The site servers were on our own wide-area network that connected them to corporate, and there were only two network segments (Development and Support) that could connect to the site servers (sites couldn't even connect to each other). Access to the Dev and Support networks from the rest of the company was highly restricted, and any unexpected access from Dev or Support netted you a phone call and/or an in-person visit from the support manager to find out what had blown up.

I can think of ways to get malware out to the POS system through all that, but all of them involve physically being in the basement of the corporate headquarters where the Support and Development department offices were located, and any unknown face would've had to avoid 2 managers and 3 secretaries before being grabbed by the scruff of the neck by Cory and hustled back upstairs (because if Cory didn't recognize you, you were not supposed to be down there).

about a month ago

The GPLv2 Goes To Court

Todd Knarr Points at the end of the article (173 comments)

I'd note that the 3 points at the end of the article aren't unique to open-source software but apply to all third-party software you use in building your software. And those points are harder to address for proprietary third-party software than for open-source, because any software component may contain other components you aren't directly aware of, and without the source code it's a lot harder to scan proprietary libraries to detect those included components (it may even be impossible if the included components are themselves proprietary, because the people who wrote the scanner may not know those components exist, let alone have access to their code to create the necessary detection routines). Or they may be easier to address: if your license for the proprietary libraries doesn't include a right to redistribute, then the answers become very simple, if rather limiting, and any less-restrictive licenses for other components become irrelevant.

about a month and a half ago

Former iTunes Engineer Tells Court He Worked To Block Competitors

Todd Knarr Not incompatible (161 comments)

Apple argues, and Schultz agrees, that its intentions were to improve iTunes, not curb competition.

I'd note that the two alternatives aren't incompatible. It's entirely possible to intend to improve iTunes while also determining that the best way to improve it is to block all competitors from accessing it (doing that would, among other things, eliminate bugs due to incorrect accesses and malformed music files and remove an inconsistent user experience due to badly-written software from other vendors). After all, when AT&T was banning all other vendors from connecting equipment to its phone network, it was only intending to protect the network from damage due to incorrectly-designed equipment (or at least so its testimony went). In neither case do intentions alter the end result.

about a month and a half ago

Submissions

Todd Knarr hasn't submitted any stories.

Journals

Todd Knarr has no journal entries.
