ydrol (626558) writes "How do developers progress from fairly simple integration work — e.g. moving and manipulating data from one system to another in a typical corporate environment — to more challenging algorithm development? Over the years I've done the former, and the challenge was usually down to time, money, or resources; the actual problem space was algorithmically simple — move data from here to there, reliably. How do people get out of that environment to work on more innovative things like the Google car? Has anyone progressed from a corporate integration programming environment to a more R&D-based environment in their mid-40s? What's it like? I'd like to think I would have excelled in my youth with fewer real-world distractions, but I'm not so sure now after years of 'dumbed-down' development."
ydrol (626558) writes "I'm fed up with integrations breaking when endpoints change their SSL certificates. I understand end users needing the 'trust' aspect of SSL, but most of the time, integrations between servers in the enterprise estate just care about encryption, and trust is an annoyance. (Obviously in some sectors it might be unthinkable to ignore SSL 'trust' in server-to-server communication, but in many it's not important, is it?)
In the enterprise, trust often comes through firewall default-deny rules that only allow specific IP addresses. And a lot of man-hours are wasted when some cloud service decides to suddenly change its root certificate supplier.
So is it OK for servers (not users) to implicitly trust servers when it comes to SSL?
Also, MITM attacks are kinda hard these days between fixed IP addresses, aren't they?"
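One middle ground between full CA-chain validation and blindly trusting any certificate is pinning: verify that the peer presents a specific, known certificate rather than one signed by a trusted root. A minimal Python sketch of that idea, assuming the expected SHA-256 fingerprint has been exchanged out of band (the helper names here are my own, not from any particular library):

```python
import hashlib
import socket
import ssl


def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def connect_pinned(host: str, port: int, expected_fp: str) -> ssl.SSLSocket:
    """Open a TLS connection that trusts one specific certificate
    (by fingerprint) instead of the CA chain, so the integration
    keeps working when the endpoint changes its certificate supplier
    but not its key/cert."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # skip chain validation; we pin instead
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if cert_fingerprint(der) != expected_fp:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch")
    return sock
```

The trade-off is that the pin must be updated whenever the server's certificate itself is rotated, which is why some deployments pin the public key or an intermediate instead.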
ydrol (626558) writes "After building my new Core 2 Quad Q6600 PC, I was ready to unleash video conversion activity the likes of which I had never seen before. However, I was disappointed to see that a lot of the conversion tools either don't use SMP at all, don't balance the workload evenly across processors, or require ugly hacks to use SMP (e.g. invoking distributed encoding options). I get the impression that Open Source projects are a bit slow on the uptake here. Which Open Source video conversion apps take full native advantage of SMP?
And before you ask, no, I don't want to pick up the code and add SMP support myself, thanks."
ydrol (626558) writes "Probably like most /.ers, I type with most of my fingers and can get around 50 words a minute at full tilt.
Recently I started my fourth attempt to learn to touch type (not by using tutor programs, but just by doing it). I'd given up before out of sheer frustration, but this time is going a lot better — probably because I'm using a decent ergonomic keyboard (bought to alleviate hand strain).
I'm at around 30 wpm and climbing. It is frustrating knowing I can type faster with my old 'style', but I hope to start hitting 70-80 wpm one day.
I just wanted to ask — is it possible to type technical documents, code, etc. (with lots of weird characters) and still come out significantly ahead of my home-brew 50 wpm typing style? Should I persevere?
I have noticed it does help me focus more on what I'm thinking (rather than on what I'm typing). The downside is that it's harder to type when I have to use crappy keyboards (which is often the case when working at client sites, etc.)."
ydrol (626558) writes "Hi, I am designing a client-server application for a customer and need to make a decision between two 'design patterns'.
I don't have on-hand access to suitably qualified peer review, so I thought I'd ask you guys/gals.
There is one server (Solaris 10) and approximately 200 clients (Windows desktops). (In reality it's likely to be 10 :) ).
Every now and then (basically, when a call arrives), the server must send a very small packet of information (approx. 100 bytes) to the client.
My initial design was to have each client open a listen port to receive the information. This listen port is within a predetermined range, and when successfully opened, the port number is sent to the main server process. The main server process then stores the client's listen port, IP address, and username for reference, for when it needs to send the packet.
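The client side of that first design can be sketched in a few lines of Python: try each port in the predetermined range until one binds, then report the winner to the server. (The port range and function name here are illustrative, not from the actual design.)

```python
import socket


def open_listener(port_range=range(42000, 42100)):
    """Bind a listening TCP socket on the first free port in a
    predetermined range and return (socket, port). The caller would
    then register the port number with the main server process."""
    for port in port_range:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("", port))
            s.listen(5)
            return s, port
        except OSError:
            s.close()  # port in use; try the next one
    raise RuntimeError("no free port in the predetermined range")
```

One consequence of this design is that the server only discovers a dead client when a send fails, so the registration table needs some form of expiry or re-registration.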
Another approach, which was pointed out to me, is to simply use permanently connected sockets. I personally don't like this approach, but it does provide immediate notification when a client disconnects and makes the client code a bit simpler. However, one will have to deal with TCP timeouts by sending keepalives or re-establishing the connection. But it is NAT/firewall friendly (although this is not an issue here).
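For the permanently-connected approach, the keepalive traffic doesn't have to be application-level pings: TCP keepalive can be enabled per socket. A minimal sketch in Python, noting that the `TCP_KEEP*` tuning constants are Linux-specific (hence the `hasattr` guards), and the timing values below are arbitrary examples:

```python
import socket


def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Turn on TCP keepalive so a dead peer is detected by the kernel
    without application-level pings. With these example values, probing
    starts after 60s of idle, probes every 10s, and declares the
    connection dead after 5 failed probes."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific tuning knobs
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
```

The default kernel keepalive timers are typically very long (two hours on Linux), so tuning them per socket like this is usually necessary if you want timely disconnect detection.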
All of the telephony solutions I've seen use the client-listen-socket approach, but they have to be able to scale to unknown numbers of clients. I expect far fewer than 200 clients at any one time.
Does the slashdot crowd have any thoughts, pros, cons of either approach?
e.g. dealing with lost connections, scalability, robustness, etc.
Or could anyone point me to a good reference on network design patterns?"
ydrol writes "Training companies (and training departments) seem to take great delight in handing over a pile of folders full of paper-based training materials at the end of a course. Presumably they don't want students stealing electronic copies of their work and training others, as it is a lucrative source of revenue. The downside is that it is often impractical to refer to these training notes after the course is over. Does anyone have any ideas — both for students (short of using psexec to grab the electronic notes from the teacher's laptop), and for training companies themselves — on how we can improve the situation?"