The Security of Popular Programming Languages
A fairly comprehensive study showed that, regardless of the language selected, the programs written by developers with the most experience in that language were the ones with the fewest security bugs, independent of language traits like attack surface or complexity.
It's short and sweet. A developer's experience with the specific language or framework an application is written in is a better indicator of application security than the language or framework an application is written in.
3D Display Uses Misted Water
I've seen the various design concepts before, and they're all variations on the same intrinsically flawed theme: a display projected onto a liquid or gas, which requires very still air and an irritating environmental system to manage, not to mention an image that is disrupted whenever a user 'interacts' with it, since the interaction interrupts the 'canvas'.
I don't know of any scheme that avoids these fundamental problems, and they will keep this from ever becoming a widely useful, much less consumer-level, technology.
I think we're just going to have to stick to visual overlays on 3D space, augmented reality style, at least until we can actually produce the sci-fi concept of projected holograms. Anything less is simply not useful enough.
A Bid To Take 3D Printing Mainstream
Theoretically, you need to:
- Level the build plate and calibrate it
- Learn any 3D modeling software to create or modify objects, often at millimeter level precision
- Learn the slicing software which converts your 3D object file into a file your printer understands as instructions
That's it. Frankly, the second one is a huge investment of time and energy, and while some simple 3D design is possible in very stripped-down programs, nothing BUT simple design is possible in very stripped-down programs. Autodesk Inventor and others may be more complex than they need to be, but only for a fairly basic definition of 'need'. Many folks just rely on others' models and skip step two entirely, and you can get by that way ... for a while.
A bigger problem for the consumer market is the practical issues. What a consumer needs is reliability and a by-the-numbers process. Like an ink printer: when I send a document to it, I hit 'print' and I expect it to work.
It took a long time for printers and copiers to get to that point. Even now we have issues where printers need different settings for different paper, and we still have paper jams and ink smears, and the basic functionality of a printer is significantly less complex.
We're not there yet, though. As a replicative process, any minor error grows geometrically as the model progresses, and we don't have consumer-level devices that match the precision of the expensive commercial-sized printers. The following items all have a large impact on the success of your build, and all of them are inextricably linked; print speed affects optimal rafting, which is impacted by the humidity, and so on and so on.
- Managing airflow, humidity, temp, and particulate matter (dust) around the device
- Rafting and supports to actually allow printing various shapes with undercuts and voids (which vary based on a number of things, not least of which is the actual model)
- Balancing heating and cooling; cooling causes contraction which results in curling especially when different parts of the build are at different temps at the same time.
- Print speed
- Print quality
- Printer head wear and tear
One of the tests of these "pro-sumer" 3D printers is to try to print the same object out 5 or 10 times, and count how many times it was successful with the same build instructions. 8/10 is really good. Usually, of course, you'll have to try 2 or 3 times just to get your first 'successful enough' print - these don't count, you're just dialing the numbers in for that model based on experience and guesswork for your specific printer.
What we're left with is this: all the made-for-your-mother, 'basic consumer' 3D printers are, and will be for the near future, akin to the Easy-Bake Oven. They sorta look like a real oven, and they can sort of cook food like a real oven, but you're not meant to use them as a real oven. Stick to the company-approved recipes only, and even then, the quality will be low.
So, no, I don't think they're going anywhere with a consumer device at this point in time. Maybe in another 5-8 years we'll be ready for the first widely usable one, but it's a bit too early to crow about it just yet.
TSA Missed Boston Bomber Because His Name Was Misspelled In a Database
It's no longer making the news, but for a while it was a nearly-daily occurrence. It's just not a big media draw anymore, unless it impacts a politician or famous entertainer.
TSA Missed Boston Bomber Because His Name Was Misspelled In a Database
I've written about this before; I used to write financial software for a living, and one of the requirements for a US bank was to provide a mechanism to detect transactions by an unauthorized person.
In short, the govt. provides a list of bad people in a text file. One name per line, all upper case, like it came out of an old batch system. We then check to see if the sender or receiver of any transaction /EXACTLY/ matches that string, case insensitive. If it's an exact letter-for-letter match, there's a flag that's set and the transaction is delayed, but it appears to go through as normal(*). What happens after that is the bank's responsibility, but that's the whole of the complexity.
Whoever makes the list usually includes a few spelling variants; OSAMA BIN LADEN or OMASA BIN LADEN or OSMA BIN LADEN, for example. But that's it. Just spelling your name slightly differently is enough to avoid the flag. We're literally not allowed to add anything else, like Soundex matching or handling foreign letters.
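To give a feel for how little is involved, here's a minimal Python sketch of the matching described above (hypothetical function names, not the actual bank code):

```python
# Hypothetical sketch of the screening logic described above: one name per
# line in a government-supplied text file, exact string match only,
# case-insensitive. No Soundex, no transliteration, nothing fuzzy.

def load_watchlist(path):
    """Read the list: one upper-case name per line."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

def flag_transaction(sender, receiver, watchlist):
    """True only on a letter-for-letter match (ignoring case).
    A one-letter misspelling sails right past this check."""
    return (sender.strip().upper() in watchlist or
            receiver.strip().upper() in watchlist)
```

A single transposed letter defeats it, which is the entire point of the complaint.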
This is ~probably~ also how the TSA no fly list works, and why you still hear about false positives from time to time. It's also probably how any security works until it's been around for 20 years and they hire a contracting company to make them really good software that does what they want, instead of what they think they want it to do.
It just takes a very long time for software designed by a legislative committee with no technical awareness to morph into something usable, but that's government for you.
* - most transactions are not sent out until the end-of-day reconciliation anyway, so it looks like it's accepted like most other transactions, probably in a 'pending' state in your online balance - unless you're paying for a wire transfer or something.
Ask Slashdot: Online, Free Equivalent To a CompSci BS?
The simple fact of the matter is that a 4-year university's computer science program is not meant to provide job training, and as far as career skills go, you could pick up the job-skill equivalent of a CS degree in under a year.
I wrote about this the other day, on the Ask Slashdot: Modern Web Development Applied Associates Degree topic, and I'm sticking to my guns on it. You don't need any math more complex than simple algebra. You don't need any theory classes.
Some of these theory classes may provide better insight, and lacking them may limit you if you're attempting to enter a highly specialized, complex field with no demonstrable experience in it (which, by the by, doesn't really happen). But for 98% of your day job, it's going to be more important for you to know how to parse and sanitize input than to know how to write a compiler or a raytracer, decompose a function into mathematical terms, perform a Big-O analysis, or design a memory manager for an OS. And you'll probably never use matrices or differential equations.
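For a concrete taste of that day-job skill, here's a minimal, hypothetical sketch of sanitizing user input via a parameterized query (using Python's sqlite3 for illustration; the table and function names are made up):

```python
import sqlite3

def find_user(conn, username):
    # The '?' placeholder lets the driver handle escaping. Building the
    # query with string concatenation instead would leave it wide open
    # to SQL injection - the classic parse-and-sanitize failure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Nothing in there requires compiler theory; it requires knowing the failure mode exists.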
Hell, the grads I see nowadays have no concept of efficient design; most lack basic database skills, awareness of common libraries and development tools, and have never used team-based tracking systems or source control, and so on. Unless they've struck out on their own, they're almost completely unsuitable as candidates. Many of the self-taught devs seem to have a better grasp of things, if only because they end up attempting to write usable software from design to implementation, instead of homework assignments demonstrating polymorphism and recursion.
On the other hand, for many HR departments, a degree is go/no-go. You'll never get to an interview without one, and there's no free, online equivalent for that. You'll just have to make do with having superior technical skills, and try to apply at a company that values that more than a sheet of paper.
Ask Slashdot: Modern Web Development Applied Science Associates Degree?
This assumes 'web development' refers to web-based applications, not just informational webpages.
This is likely to be an unpopular opinion to many, but I don't see the huge barrier here.
I've been working as a software developer for nearly 20 years now, going from games programming to business apps to web development and machine learning. In that whole time, I can count only a small handful of times when I've ever had to exhibit mathematical skills more complex than trivial algebra. Oh sure, in college, they made me write my own compilers, I had to write my own vector math routines for my ray tracer, and so on, and I consider these valuable learning experiences. However, in the real world, where I'm employed and make money, I use software libraries for those sorts of things.
When it comes to data structures, the languages employers want today, Java and C#, provide me with the majority of structures and optimized-enough algorithms to manipulate them. I don't have to do a Big-O analysis to determine whether my data patterns would be better served by a skip list or a quicksort, because we just throw memory and CPU at the problem anyway!
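To illustrate the point (a toy Python example standing in for the equivalent Java/C# collections): the stock containers and built-in sort cover most day-to-day needs without any hand-rolled structures.

```python
from collections import Counter, defaultdict

orders = [("alice", 3), ("bob", 1), ("alice", 2)]

totals = defaultdict(int)        # hash map with a default value, no setup
for customer, qty in orders:
    totals[customer] += qty

# The built-in sort (Timsort) is "optimized enough" for almost everything.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Counting, the most common data chore of all, is one library call.
busiest = Counter(c for c, _ in orders).most_common(1)[0][0]
```

None of that required implementing a data structure, just knowing the library exists.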
The point is, if you spend 1-2 years learning to write software - not computer science theory - you'll be ready to enter the workforce. Sure, you're not going to be the one creating those frameworks, and you're not going to be an architect, but you'll be able to use them. After a few years of real-world problems with Google at your fingertips, it's likely you'll have learned enough to start tackling those harder problems.
Here's a list of what I'd prioritize before computer science theory, in regards to employment:
- Proficient in SQL and at least one database type
- Familiar with IDEs, source control, bug/task trackers, automated builds and testing, debugging tools and techniques.
- Ability to work in a group software project.
- Exposure to, and participation in, a full-blown software development life cycle (SDLC): reading, writing, and evaluating requirements, coding, debugging, QA, unit testing, the oft-overlooked documentation, etc. Include at least something waterfall and something agile-ish.
I don't think I need to explain the value of any of these, and these practical concerns trump high level concepts like discrete mathematics or heuristic design for the entry-level developer.
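To put a face on the 'automated testing' bullet above: a unit test is just a function asserting expected behavior, something any build server can run on every commit. A minimal, hypothetical example:

```python
def normalize_email(addr):
    """Lowercase and trim an address so lookups stay consistent."""
    return addr.strip().lower()

def test_normalize_email():
    # A CI system (e.g. pytest on an automated build) would discover
    # and run this on every check-in, catching regressions immediately.
    assert normalize_email("  Bob@Example.COM ") == "bob@example.com"
```

An entry-level developer who has lived with this workflow for a semester is ahead of one who has only handed in homework.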
The Science of Solitary Confinement
Prisons serve a few purposes:
1. Removing a danger to society
2. Acting as a deterrent
3. Serving as a punitive measure (strongly related to item #2)
4. Providing rehabilitation
To date, analysis has shown that never in the verifiable recorded history of crime and punishment has any prison, anywhere, had a non-negligible impact on recidivism rates. Some pre-established percentage of people continue to commit crimes after a jail sentence, regardless of changes meant to enable rehabilitation. Education, trade skills, access to medicine & counselors, 'nice' quarters, access to games and exercise, work release programs, etc. - no appreciable impact.
Even punishments like public shaming (very big in medieval times) have no impact on the average number of individuals willing to commit the crime again. Even torture (short of permanent harm) has no real lasting impact, though it does often result in the individuals using more effort to reduce the risks of getting caught.
In short, prisons do not rehabilitate prisoners, and they never have. 
Pretending that they do, or can, and then making screeching noises when they fail - or worse, throwing money at them so they can try yet another fad get-lawful-quick program - is just irrational. Blaming the system for not working as one expects only shows the value of those expectations.
Here's the takeaway: the only things prisons are good for are removing a danger from society and providing a punitive threat as a deterrent - and even the latter has only limited impact.
For those interested in constructive comments, the fix is obvious and simple; spend that money on fixing those parts of society that give rise to crime. Focus on education, focus on a two-parent household, focus on employable skills, and so on.
 - oy. Google it, read some books, and take a few criminal justice classes. Personally, I'd start with this book, http://www.amazon.com/CRIMINAL... because it's a fascinating read, but your mileage may vary.
 - though there's nothing to say they couldn't eventually. Maybe cryogenically freeze them and subliminally imprint upon them the desire to knit when they're stressed? Could work.
 - Technically, life in prison works, in that they don't commit any more crimes, but the important point to note is that rehabilitation programs STILL have no impact on this rate. So it doesn't count either.
Thief Debuts To Mediocre Reviews
Were you talking about the daily news and world events? The political history of most existent countries and almost assuredly the history of those that no longer exist?
Sure, there are fewer metal golems and tricksie lords, but what you're describing is how the world actually seems to work. You can't shelter kids from that, and if you do, the result will be an individual incapable of dealing with reality. It'd be like living on the "Small World" ride until the age of 12 and only then being released into the world. That's a hit to a psyche.
Star Trek Economics
In the timeline of a pre-post-scarcity world, we have a population of unemployed individuals that will grow as job growth - especially in unskilled blue-collar labor - flattens or turns negative. Until we're in a post-scarcity world, however, these individuals will live in a society that requires money for things like food, shelter, and clothing - whether that money comes from the government or not.
At some point, the government simply won't be able to provide; its budget will be stretched too thinly across the nation. This is one of those situations where we'd be hard-pressed to progress iteratively - it's a "flip the switch" sort of thing. Doing otherwise will create a massive underprivileged underclass, who are likely to be quite frustrated by their lives; no job or job prospects, subsistence-level living, inability to focus on personal goals or desires...
Two things can happen at this point:
Those who have focused their lives on acquiring wealth, the super rich, the 'haves', the ones who are most defined by the benefits wealth has brought them, they can all become completely selfless altruists, and together, agree to reduce their primary value to near zero by agreeing to, effectively, eliminate money in the spirit of pure socialism. Thus, utopia is achieved.
Alternatively, they will not do that, and at some tipping point - say, 60% unemployment - there will be a revolt that destroys the current economy, form of government, and so on, setting us back to zero on the cultural progress scale - and likely the technological/engineering one - but removing the then-existing artificial constraint that says work = money.
I really don't see the first happening. Do you? Am I overlooking some important alternative choice?
In actuality, I think we're headed towards a more corporation-centric outcome, as predicted by many of the darker sci-fi novels out there, rather than a post-scarcity world, but hey, that's just my opinion.
How I Lost My Google Glass (and Regained Some Faith In Humanity)
It was repeated several times in the article; she was worried about 'media' on the device being posted to the internet. That it would be a 'voyeuristic invasion of privacy.'
You all realize what's being said here right? I don't think I'm speculating too much here; she took naughty photos and/or video with her glass. That's why she was so worried. Not the cost, nor her email (which she changed the password on after the fact), nor much of anything really, aside from the 'media on the device'. This wasn't her worrying about someone being critical of her lunch choices or the amount of mayonnaise she uses. She recorded some pants-off time and didn't want to be embarrassed.
At some point, society needs to get a little more aware of its own situation. Anything you record in digital media may very well be persisted indefinitely and seen by others, whether through theft, cell phone hacking, or an upset significant other. This goes for tweets, emails, forum postings, photos, and video. If it's electronic media nowadays, you can bet that at the very least the/a government has access to it if they want it.
There's a really easy way to avoid this though; learn to never record something you don't want other people seeing. It's not that hard. Alternatively, make your peace with it if you choose to do so anyway.
ICANN's Cozy Relationship With the US Must End, Says EU
So the problem seems to be that ICANN is an american corporation, and thus subject to the laws of the US, and that in turn, could be used against foreign powers?
The solution then is to 'globalize' it? Where is it going to be 'globalized' to? Which country could it exist in where it would have immunity to any laws and act with impunity in regards to them?
When I see the complaints against it by China, Russia, the EU, and so on, they're always advocating more restrictions, protection of their interests. They want the ability to blacklist sites that talk about their politicians, that discuss unfavorable religions or religious rights, that cover alternative lifestyles such as gay or transgender, and so on. They want to do it without arbitration, automatically.
What they really are complaining about is that they don't have absolute control over it, and they want it. Everything else is just a pleasant lie or deliberate misdirection.
Let's be fair; the US has more than its fair share of faults, but our definition of freedom is still incredibly wide-reaching compared with that of the vast majority of countries in the world, and we're big enough that it's hard to push us around with political power alone. That's the big problem they're seeing. ... Besides, use of the current DNS registry system is entirely voluntary. There's nothing to stop someone from coming up with their own, like the TOR network did. If it's better, people will use it over the current one. Though I think they realize that any replacement that is more strictly controlled will never be considered 'better', so they need to subvert the current one.
Ask Slashdot: Should Developers Fix Bugs They Cause On Their Own Time?
Let's assume the programmer is average. They write some good code, some not so good, they have a certain bug rate. We have to assume that even the best programmers introduce bugs here and there. If you assume that no bugs will be created, or will develop later on - you are not fit to be a manager. Bugs happen.
Knowing that, a good project manager is going to create a system with peer review, with automated and manual testing, both unit and functional, frequent project sanity checks and of course, reasonable timelines and room in the schedule for refactoring and teardowns, not to mention some amount of signoff from those who okay'ed the project and approved each step.
If they won't provide that - it's on them. If they can't, it's those above them, and so on. That's the way it works; those above you in the chain need to provide an environment in which to excel, if they expect excellent results.
The downside, of course, is that it costs time and money, even if you start with exceptional people - not just average ones.
Ask Slashdot: What Do You Do If You're Given a Broken Project?
Is it about the money? Is it about maintaining a professional relationship? Having a steady job? Completing a challenging assignment? Learning a new skill? Working on an app that will eventually be released as a finished product instead of a never-ending series of bugs or rolling feature updates from an agile process with no end or sense of accomplishment?
Figure out what you want out of it, and then take the steps to achieve that.
That aside, I personally don't place a lot of value in seniority for the sake of seniority. That someone 'respected' worked on it means nothing at all if the product is crap.
At one workplace, I acquired a project much like you did; our three architects had all worked on it personally over a 10-month period. It took me 2 weeks just to get it running locally on my own machine - so much had been hardcoded: pathnames, machine names, pre-existing SQL connections with expired logins on machines only accessible from within a cluster. It had unimaginable complexity, built so that they could 'throw it over the fence' to the ops team, supposedly let them own it, and have them update it whenever our software changed in the future. They would only need to learn Java, SQL, our internal table structure (undocumented and continually changing), and SSIS too. It didn't help that the software still didn't work; it'd run for 2 days and then drop a 40+ gig core dump.
Yeah, I complained and complained, and everyone just said 'make it work', so I talked to the end users and product owners, collected requirements, and wrote the whole thing from scratch as a command-line tool in about 4 hours. I then had to spend 2 days making a PowerPoint presentation to demonstrate how it was functionally superior (CPU, memory, bandwidth, throughput), easier to use (2 pages of documentation), well commented and structured, had no third-party dependencies (so no extra $$$ for database licenses and such), and how it followed company statements and policy (one of which was explicit: 'Do not just "throw it over the fence"').
I got a lot of positive attention from that. If recognition is your thing, that may be the way to go.
When I eventually quit that job they remembered that I got stuff done, and done well. So now I work for them in my spare time, making 3x my previous salary, on discrete projects where I call the shots and they just need something that works well without dealing with months of crap in between. So, I eventually got money and responsibility too.
Of course, your results may vary.
The Moderately Enthusiastic Programmer
Who ends up writing these descriptions? The programmers? Their team lead? The architect? No. They just provide the job requirements.
It's your HR staff, your middle and upper management. They come up with corporate statements like "Engage our customers and employees with passionate, best of breed solutions and lead the mindshare" and that jumble of words has real meaning in their world. Now when they issue a statement, they're going to be asked things like "Does this grow our mindshare? Can you put a metric on the net 'passion' of this business decision?" This leaks through into their job descriptions among other places.
You've probably been exposed to this phenomenon before and come away confused; this world is about 90 degrees away from the norm - just enough to suck you in with familiar words and phrases, which only reinforces the alien nature when they're used to mean something else entirely.
You ever get the question in an interview, "Where do you see yourself in 5 years?" - that comes from the same mindset.
Look at it from a business standpoint: If your company makes great sprockets, and you consistently make a million dollars in sprocket sales ... you've failed as a business. The metric isn't how much you make. It's not even how fast you make it. It's how fast you increase the rate you make it. So when they hire execs, they want them to say "in 5 years, I want to be the division manager of the newly created south pacific department, that I've built from the ground up" - or something. They don't want them to say "Valued for my professionalism, expertise and domain knowledge, doing the same job I've always done." That means they're somehow broken, that they just don't "get it."
You'll probably note that when they bring in non-administrators, those folks don't ask those questions unless they have no idea what they're doing hosting an interview. On the other hand, if the majority of your company is focused on high pressure sales - real estate, auto, etc - you'll be exposed to it more and more, even in support jobs like IT, and even from other engineers.
This is just one of those cases. If you're not in sales, marketing, middle-to-upper management, it's usually safe to ignore the parts of the job description that don't relate to your actual job. They likely have nothing to do with it.
Office Space: TV Documentary Looks At the Dreadful Open Office
I've worked with an industrial psychologist for quite a while now - they focus on things like pre-employment screening, improving employee efficiency, hiring (both from the company and candidate viewpoints) and so on.
One of the things they'll point out is that not every employee has the same motivations or same 'best' work environment. You're going to get some that thrive in an open environment, and others that don't. You'll get some that spend more time chatting, and others that use collaboration to become more productive. Unfortunately, there's no silver bullet to say which is best, and office layout is only a small part of that anyway.
However, you can do an employee survey (by which I mean an actual scientific survey with statistical analysis, not just a slapped-together 'do you want open seating, yes/no' form) and determine which environments work best for your best workers and your average workers. This gives you the information you need to make a good decision. For example:
- Does it make sense to change the environment to make the average workers more efficient?
- Alternatively, should you change the environment to make your star workers most efficient and expect that the environment will help turn your average workers into stars (and weed out the underperformers)?
- Are your tasks inherently better suited to solo efforts or team efforts?
- Are your employees good communicators?
Of course, most of this is moot.
There are only a few cases where an immediate manager has the ability to radically restructure the work environment - those decisions are made higher up. At the same time, those higher up are making decisions primarily on immediate financial costs - so cubes and open offices are much more cost effective.
Personally, I'd rather have a small office with complete control of the light and temperature, and no chance that someone's looking over my shoulder.
Short Notice: LogMeIn To Discontinue Free Access
My problem is that I need a remote Windows access solution that doesn't look like it required extra effort on my part to circumvent security restrictions.
While my department and local IT people couldn't care less - in fact, they actively enable me to do my job - corporate might have an issue.
We are ~technically~ not allowed to download and install software (in fact, links to .zip, .exe, and a host of other file types are denied), and standard RDP, VNC, and similar ports are explicitly blocked. LogMeIn handled everything via the browser and browser dialogs, so I could claim ignorance, and it required no special network configuration.
Sorta sad to see it go. I liked having plausible deniability.
"Clinical Trials" For Programming Languages?
We all innately know this, but sometimes it's hard to avoid getting caught up in a religious debate about the apples-to-oranges details.
Once your language is at least complex enough to write a compiler or interpreter for itself - that is, it's no longer a toy language - it tends to be more or less capable of everything every other language is capable of. Sure, it might not be the best tool for a given job, but they're all generally the same aside from some syntactic sugar.
The more important point is that the metrics we can track are directly correlated with a person's experience in a language. The longer an individual spends writing in a language, the less time it takes them to complete a program, the fewer errors it contains, the lower the severity of error incidents, the fewer security issues it has, the better optimized it is for the platform, the less memory or CPU it uses, the more features it has, etc.
Those are reliable statistics, and the trend holds true regardless of the language. That has real-world value far above and beyond arguing about whether whitespace should be part of a language, or whether it uses Smalltalk's or C++'s object model, to name a few items I've seen above.
The end result; whatever language you tend to use the most is going to be the best language for you to use, often even when it's not a good fit.
Do Non-Technical Managers Add Value?
You ever duck your head down, put the earphones on, and cut a swath through the feature list, barely realizing that you've missed lunch and it's already 7pm? You'd leave but you've just thought of a really elegant optimization routine and it's so obvious, but you need to see it work before you go?
A good manager can provide coordination between project members, act as an insulating buffer between customers/requirements and devs, fight for resources, push back against poor requests and push forward agendas like refactoring, internal tool development, or library updates (ie, the Good Fight). Really though, this boils down to the simple goal of letting the devs do their job.
Without all the other context switching, we're free to descend into code mode, shut out the outside world, and make beautiful code that we're proud of. In practical terms, that means fewer bugs, better security, more efficient code, lower maintenance costs, and so on. That's the biggest thing a manager can really provide: an environment where we're free to excel.
That doesn't require any sort of technical chops.
USB Sticks Used In Robbery of ATMs
I used to write financial software for a living, including ATM driving software.
I realized, after a while, that I had certain preconceived notions about the sort of software and hardware running on these high-profile, high-risk systems. Obviously, the software will have been made highly secure: redundant checks on every action, code signing, etc. It'd likely be running a custom operating system built from the ground up and booted off a (P)ROM. The case would be just as impenetrable, with a separate compartment for the computer itself, requiring specialty equipment so that it could only really be opened at the point of origin, or in a manner certain to destroy the innards - and certainly not in the field.
Right? I mean, any of us can think up a set of reasonably secure basic premises from which to build a system like this.
Imagine my surprise when I found out that half of the ATMs out there are just running off-the-shelf Windows desktops, with the original demo software still installed. There's no real optimization, no cleanup, no limited boot, nothing; it's just a desktop machine jammed in a vending machine with a custom card & cable driving the mechanics of the ATM. Sometimes they're even in the original manufacturer's case (though usually it's just the board). I've also done some work on vending machines, and I can tell you that they're often better made!
As a software developer, one of the things I was shocked to see was that ATM security is almost entirely physical. There's little to stop someone from hooking up an external line and sending approvals, doing basic proxying (most of the data is sent in the clear, so just skim it), or updating the system from a CD or USB stick if you pull the front cover of the ATM off. Many times, you'll find someone left a keyboard and mouse behind inside the unit, because it's a pain to always carry your own when doing updates or what have you.
This follows the same basic trend in the rest of the financial systems I've seen; physical security is very high, software security is relatively low. When it comes down to it, most companies focus on tracking transactions rather than securing them, and rely on constant manual review by staff to detect problems. (That's why banks close so early - the folks who don't run the registers are in the back doing the day's reconciliation.)