
Comments


Analyzing StackOverflow Users' Programming Language Leanings

Jerry Coffin Better basis for comparison (185 comments)

I'm a bit surprised that while a number of people have pointed out how lousy Tiobe is as an index of popularity, nobody's pointed to an alternative. I'd suggest langpop.com as a considerably better alternative.

The most obvious point of superiority is that they document both what they actually measure and how they combine the individual measurements to produce a final result. Although Tiobe doesn't document what they do well enough to be sure, it looks like langpop.com covers a couple of types of sources that Tiobe doesn't (or at least doesn't imply they try to cover). One particularly interesting point is that they attempt to gather data about actual code, not just questions about code (e.g., they look at Freshmeat, Ohloh, and Google Code).
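The aggregation idea itself is easy to sketch. To be clear, the sources, counts, and weights below are entirely made up for illustration -- langpop.com's actual inputs and weighting are their own:

```python
# Hypothetical sketch of combining several popularity signals into one
# score, the way an index like langpop.com might. Source names, counts,
# and weights here are invented, not langpop's actual method.

def normalize(scores):
    """Scale raw counts from one source so the top language gets 1.0."""
    top = max(scores.values())
    return {lang: count / top for lang, count in scores.items()}

def combine(sources, weights):
    """Weighted sum of the normalized per-source scores."""
    combined = {}
    for name, scores in sources.items():
        for lang, score in normalize(scores).items():
            combined[lang] = combined.get(lang, 0.0) + weights[name] * score
    return combined

# Made-up counts: search hits, open-source projects, job postings.
sources = {
    "search":   {"C": 900, "Python": 800, "Perl": 300},
    "projects": {"C": 500, "Python": 700, "Perl": 200},
    "jobs":     {"C": 400, "Python": 600, "Perl": 100},
}
weights = {"search": 0.5, "projects": 0.3, "jobs": 0.2}

ranking = sorted(combine(sources, weights).items(), key=lambda kv: -kv[1])
print(ranking)
```

The normalization step matters: without it, whichever source produces the biggest raw counts would dominate the result regardless of the chosen weights.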

Oh, and no, I'm not affiliated with Langpop.com (or Tiobe) in any way.

more than 2 years ago

1/3 of People Can't Tell 48Kbps Audio From 160Kbps

Jerry Coffin Re:Did they use the mosquito sound? (567 comments)

According to TFA, they did at least attempt to deal with this particular problem by providing all the listeners with a pair of quite decent Denon headphones. While you can (and some certainly do) argue about whether they're as good as the best you can get from Grado or Sennheiser (IMO, even the somewhat less expensive Sennheiser HD 650s are distinctly better), they're still way out of the league of the speakers most people connect to computers.

more than 4 years ago

Software To Flatten a Photographed Book?

Jerry Coffin Re:Anonymous Coward (172 comments)

Better yet, avoid zoom lenses and get a dedicated macro lens. Nearly every camera manufacturer makes at least one macro lens, and often two or three of different focal lengths. For books, a ~100mm macro works quite reasonably.

The biggest problem is that a macro lens can be somewhat on the expensive side. If you want to stay cheap, Cosina makes a 100mm f/3.5 macro that looks and feels cheap, but has quite decent quality optics. This is widely available under various other names (Promaster, Quantaray, etc.). In fact, nearly any current, off-brand, 100mm f/3.5 macro lens is likely to be made by Cosina.

Compared to zooms, dedicated macro lenses are nearly always sharper, and (particularly) have extremely low distortion. FWIW, most of them work pretty decently as portrait lenses too.

about 5 years ago

Microsoft: Windows 7 Upgrade Can Take Nearly a Day

Jerry Coffin Ars NON-Technica (706 comments)

Perhaps Ars Technica should rename themselves "Ars incapable of comprehending anything even marginally technical". After mentioning the 1220-minute upgrade, they comment that: "We don't even want to know how long it would take if Microsoft had bothered doing the same test with low-end hardware."

Let's see now: the 1220-minute upgrade included 650 GB of data, but the low-end hardware only included a 320 GB hard drive. Does it really take any great brilliance to see a problem with that?

about 5 years ago

Microsoft: Windows 7 Upgrade Can Take Nearly a Day

Jerry Coffin Re:Mid-end?! Really?! (706 comments)

Mid-range I believe is the common one. Of course if you only want one end you can have entry level instead of low end.

And, FWIW, "Mid Range Hardware" is what it's called in the original blog entry -- the 'mid-end' nonsense is just another artifact of the /. summary process.

I think "low end" is much more descriptive than "entry level" -- if anything, it seems to tend toward the opposite. Entry level users have relatively new machines with fast CPUs and big hard drives. It's the more experienced users who can get by with horribly obsolete hardware...

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Well... (304 comments)

The Patent holder should have been required to submit their source code to get the patent to start with, [ ... ]

The patent office used to require submission of a model for any patent, but stopped, largely because storing all the models became cumbersome and expensive. In theory, it wouldn't need to be so cumbersome for source code, but see more about that below.

[...] Facebook should only have to submit its source to an independent third party for review.

That's almost certainly the case -- it'll really be turned over to the opposing counsel (i.e. attorneys) and they'll hire (non-Facebook) experts to examine the code. Those experts, in turn, will be required to sign a protective order, promising they'll only use it for the specific purpose of proving claims in the current case, not anything else.

I've been in that position a number of times, and can honestly say I've never even been slightly tempted to steal from the source code I looked at. Quite the contrary, such work is usually done on a tight enough schedule that you're working too hard to meet deadlines to really think about much else, and by the time a case is over, you never want to look at any of it again!

I realize this isn't how software patents work, but they need to start requiring source code submissions for the applications.

Perhaps it's best to consider how patents on software came to be accepted to start with. There was a patent on a machine for curing rubber. Somebody else built a machine that clearly did what that patent described -- but under control of software running on a CPU, instead of electronics designed specifically for that purpose. The case got to the supreme court, which ruled that the simple fact that the machine included a CPU and some software to control it didn't change the fact that it was a machine that executed the patent.

From a legal viewpoint, there's still not really a patent on software per se -- there's a patent on a machine that executes some software, or on a process of doing something that happens to be carried out by a computer under the control of some software.

As such, if you try to apply such a rule to "software patents", you almost inevitably have to apply it to patents on other kinds of machines. The minute you do that, however, you're back to the cumbersome, expensive storage of all those machines.

Facebook might be using something within their source that could be patentable that is not related to any existing patents, and they don't want to disclose their methods and routines to any outside party. This is not at all uncommon; we call these things "trade secrets". How do we know that this isn't just a ruse to get access to trade secrets or other unrelated code?

See above or just Google for "protective order". This is hardly the first court case involving information that might be sensitive...

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Can Facebook Obfuscate? (304 comments)

Can Facebook simply provide the source code in obfuscated [wikipedia.org] form?

Probably not. The current federal rules of civil procedure state that you:
(C) may specify the form or forms in which electronically stored information is to be produced.

Doing so would be a bad idea anyway -- giving a judge the idea that you're trying to cover up what you've done will almost always do more harm than good.

You might be surprised how little obfuscation would accomplish though. Quite a few cases are developed just from disassembled executables, with no source code at all.

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Of course (304 comments)

Congratulations. You've described a rule 34 inspection almost perfectly! Sadly, I'm not even being humorous.

Oh, there is one minor difference though: a rule 34 inspection is normally used for something like a large machine that can't reasonably be delivered to the other side.

The rest of it is pretty accurate though. For one example, I was involved in a case where the other side was ordered to produce a copy of a floppy disk -- so they sent a Xerox copy. This was recently enough that even the judge realized that was a problem, and told them that they needed to send a copy of the contents -- so they loaded executables into a text editor (Notepad, to be exact) and printed them out, in a font that didn't have characters for many of the codes, so about half of it was the Windows empty-square box. The best part was the (literally) couple of thousand blank pages where a padding character (or something on that order) happened to correspond to a form feed...
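The blank-page effect is easy to reproduce: machine code is mostly bytes with no printable representation, and byte 0x0C is the ASCII form feed, which tells a printer to eject the page. The byte string below is invented for illustration, not taken from any real executable:

```python
# Why printing an executable from a text editor yields garbage: most
# bytes aren't printable ASCII, and runs of 0x0C (form feed) produce
# page ejects -- hence the blank pages described above.

# A made-up run of "executable" bytes: 0x0C padding plus a few opcodes.
data = bytes([0x0C] * 8 + [0x55, 0x89, 0xE5, 0x00, 0x00, 0x90, 0xC3])

printable = sum(1 for b in data if 32 <= b < 127)   # printable ASCII range
form_feeds = data.count(0x0C)

print(f"{printable}/{len(data)} bytes printable, {form_feeds} form feeds")
```

In a real binary the proportions vary, but the pattern is the same: a font substitutes boxes for the unprintable bytes, and any form feeds silently become page breaks.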

Tactics like that can be dangerous though -- the judge clearly recognized what was going on, and didn't like it a bit. For the rest of the case, he didn't cut them a break on anything. At the beginning of the case, I'd told our clients that IMO, the facts only favored them by about 60:40 or so, but by the end, there was virtually no way we could lose (and we didn't). In his decision, the judge even commented on the "assiduous and ongoing dishonesty" of our opposition (I think I'm quoting that correctly -- it was close to that anyway).

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Most OSes fall under the claims of this patent. (304 comments)

Jerry, Thank you for pointing out my omission of the networking requirement. I am not a lawyer, but I have worked on a few patent cases as an expert, so I know to read the patent before talking about it, even if I am not as careful as a lawyer at reading over it. :-)

I'm in pretty much the same position, except that I've been doing it long enough that I'm probably more anal than most lawyers about how I read claims...

I believe the networking requirement you mention will be fulfilled by any system which needs to use a network to validate user information from a central source, such as Kerberos authentication or Windows Active Directory mechanisms. Of course, LDAP was mentioned in the patent, but these go beyond LDAP.

Active Directory (to use your example) certainly provides more than LDAP, but it does support LDAP, and from a viewpoint of the data and organization, it doesn't really provide a lot beyond the kinds of things LDAP can provide. It does add a lot of things like directory replication that LDAP doesn't address, but those aren't really relevant here. Those track things like whether a user is logged in, but this is talking about the applications and files the user has open. You could argue that those are equivalent, but I think with the specific mention of LDAP in the patent, they'd probably be fairly safe from that type of prior art.

As I said in my previous post, though, I'm not really trying to say the patent necessarily is valid. Maybe Facebook can and will come up with some really compelling evidence of prior art. If the suit settles out of court, it might be for precisely that reason. Then again, it could be just the opposite -- that Facebook looked for prior art, and couldn't find anything even close, so they gave up. On the other hand, it could also be a simple matter of economics -- if Facebook figures it'll cost them five million dollars to defend themselves in court, and gets an offer to settle for two million, there's a pretty good chance they'll take it, even if they're pretty sure they could win in court.

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Yay! It's Ignorance Day! (304 comments)

Having a judge presiding on a case whose technical details he is wholly ignorant of strikes me as terribly dumb.

The judge is only supposed to decide questions of law, not of fact (questions of fact are decided by the jury). As such, the judge's expertise is supposed to be primarily in applying the law to the case at hand. Our legal system does recognize, however, that in a technical case, the judge frequently needs to understand technical details to be able to apply the law intelligently. The court is allowed to appoint a "special master", who is a neutral expert in the technical field to advise the court (i.e. mostly the judge) about the technical questions involved.

Of course, leaving all the questions of fact to a jury isn't necessarily a huge improvement. Turning technical questions about code over to a bunch of people who couldn't get out of jury duty doesn't exactly guarantee an accurate answer to those questions...

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Prior Art? (304 comments)

So they basically claim they have a patent on the one-to-many Foreign Key?

NO! In fact, the patent itself specifically cites a one-to-many relationship as already being known. The attempt at claiming coverage of a one-to-many appears to come only from the incompetent who wrote the summary.

about 5 years ago

Facebook Ordered To Turn Over Source Code

Jerry Coffin Re:Most OSes fall under the claims of this patent. (304 comments)

After reading through the '761 patent, any operating system which initiates a user working-space at login, e.g., a shell, will fall under the main claim of this patent.

It's refreshing to see somebody at least try to read the patent. I have a hard time believing anybody could mis-interpret it this badly though. Let's look at part of claim 1:

a computer-implemented tracking component of the network-based system for tracking a change of the user from the first context to a second context of the network-based system

How would an operating system with a shell qualify as a "network-based system"? Answer: since it's not network-based, it's not even close. Even something like logging in remotely isn't really network-based -- it's based on one computer, and happens to have a network between the CPU and the terminal. Here they seem to be talking about something that's truly network-based -- something intended exclusively (or at least primarily) for access over a network, and (quite possibly) the "server" isn't necessarily a single server, but itself an entire network.

Exactly what "network-based" means for this patent doesn't seem entirely clear to me though -- and the patent specification doesn't really tell us either (the phrase "network-based" isn't mentioned in the specification). If that claim is part of the lawsuit, there will probably need to be a "Markman" hearing to decide how the claim should be construed. The court is required to presume that the patent is valid, and therefore attempt to construe the claims in a way that doesn't make prior art obvious -- and in this case, I think "network-based" is pretty easy to construe as meaning something that prevents a normal (or even remote) login from being prior art, so if the issue arises, there seems to be little question that the court would do so.

For those who've talked about tagging being an infringement, I'd note that "metadata tagging" is specifically mentioned in the "background of the invention" as being known related art. Likewise, those who've talked about a: "one to many relationship" (or various similar phrases), that's also mentioned in the background of the invention as already being known, not falling within the patent.

Now, I'm not going to try to argue that the patent is necessarily valid -- that's a question the court will probably need to address, and if Facebook's attorneys are doing their jobs, they'll have specialists in prior-art searching put a fair amount of effort into researching reasonable possibilities of prior art. It does look, however, like if there is prior art, they probably really are going to have to do some serious work to find it. It might well exist -- quite a few people were working on similar ideas around the same time, and it's entirely possible somebody else beat these guys to it. If it is out there, however, it's going to take quite a bit of hard, careful work to find it and show that it really does include all the limitations in the claims of the patent.

Just FWIW, I'd also note that to invalidate a patent, you don't just have to find prior art for one of the claims -- you have to find prior art for all the claims, or at least all the claims at suit. Looking at their dependent claims, we find things like:

30. The system of claim 23, wherein the first user workspace is associated with a plurality of different applications, the plurality of different applications comprising telephony, unified messaging, decision support, document management, portals, chat, collaboration, search, vote, relationship management, calendar, personal information management, profiling, directory management, executive information systems, dashboards, cockpits, tasking, meeting and, web and video conferencing.

I don't think Facebook provides all those, so they're probably not being sued over that claim, but for statutory prior art to invalidate that claim, you'll need to find a web site (or something similar to a web site, anyway) that provided every one of those applications by December of 2002 (and, of course, did the automated metadata-updating based on context, etc., cited in the earlier claims). It's certainly possible such a thing existed -- but if so, I'm pretty sure it's going to take some real work to find and prove it.

about 5 years ago

Open Source Camera For Computational Photography

Jerry Coffin Re:Do want (167 comments)

The principles of optics have been known for a long time. The theory of how to build good lenses has been around nearly forever. Most current fixed focal-length lenses can be traced back to 19th century designs like the Zeiss Protar and the Goerz DAGOR (and it's easy to find an exact formula for something like this, including curvature, index of refraction for each element, and so on). Zoom lenses are somewhat newer, but again, most can be traced back to Angenieux designs from the mid-1950's (or so).

The difficulty is in actually building a real lens to do what you want. Most of how to do that is widely known as well -- but when you're creating something where the basic unit of measurement is a single wavelength of light, virtually everything has to be done quite precisely. It's possible to do things by hand -- but building a reasonably modern zoom lens would be several years of work for one person. Back when I was younger, grinding a mirror was a rite of passage for virtually every amateur astronomer -- it took months of painstaking work to grind just one of a size that's fairly easy to handle. I can't imagine grinding a dozen or more, especially at the much smaller sizes necessary for a typical camera lens (keeping in mind that one element of a lens is already double the work of a mirror, since you have to grind and polish both sides, not just one).
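To put "a single wavelength of light" into numbers: a common rule of thumb for a good optical surface is the Rayleigh quarter-wave criterion, evaluated at green light near the middle of the visible spectrum. These are general optics rules of thumb, not figures from any particular lens:

```python
# Rough tolerance for a diffraction-limited optical surface: the
# traditional Rayleigh criterion of lambda/4, at green light (~550 nm).

wavelength_nm = 550            # green light, roughly mid-spectrum
tolerance_nm = wavelength_nm / 4

print(f"surface accuracy needed: ~{tolerance_nm} nm "
      f"({tolerance_nm / 1e6} mm)")
```

That's roughly 140 nanometers of allowable surface error -- about a thousandth of the width of a human hair -- which is why "virtually everything has to be done quite precisely" is no exaggeration.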

Especially for a zoom, the mechanics are extremely non-trivial as well. You have to build a lens barrel that allows a dozen (or so) different groups of elements to move relative to each other, maintain exactly the right distances, never tilt or go off-axis, etc. I've simply disassembled, cleaned, lubed, and reassembled a couple, and even that's not for the faint of heart. Designing and building one is definitely a serious task.

about 5 years ago

Open Source Camera For Computational Photography

Jerry Coffin Re:True black and white sensor. (167 comments)

Today, bayer matrix and AA filters are glued on the chip in the manufacturing process, and it's impossible to get rid of it afterwards.

Not that it really makes a huge difference, but while the Bayer matrix is fabricated as part of the sensor chip, the AA filter is not.

Removing the color filters would not really affect the requirement for AA filtering either. And, just FWIW, there have been a few cameras built with Bayer filters, but not (physical) AA filters (e.g. the Kodak Pro dSLRs).

It would appear that in most cases, the AA filter doesn't really have much effect on final sharpness anyway. Just for example, if you read through a description of a test procedure, you quickly realize that very few pictures approach the maximum sharpness of which a current camera/lens combo is capable.

Finally, I'd note that if you really don't want a Bayer-pattern sensor, you can get a Sigma camera. They use the Foveon X3 sensor (Sigma has since bought Foveon outright), which has individual sensor elements "stacked", so there's a red, green, and blue element at each pixel location. While Sigma's cameras are perfectly good and work quite nicely in general, they're not really a whole lot better than most others (if anything, they seem to lag a bit behind the field in general). At least when you look at a JPEG, however, those extra sensor elements aren't doing a whole lot of good -- a JPEG sub-samples the chrominance channels, so they have half the resolution of the luminance channel anyway. This isn't quite identical to a Bayer pattern, with twice the density of green sensors as red or blue, but it works out pretty close.
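The "works out pretty close" arithmetic can be checked directly. The 4000x3000 sensor size below is an arbitrary example; the subsampling factors are the 4:2:0 scheme JPEG most commonly uses:

```python
# Comparing sample counts: a Bayer sensor devotes half its photosites
# to green and a quarter each to red and blue, while JPEG's usual 4:2:0
# subsampling stores chroma at half resolution in each dimension.
# Hypothetical 12 MP sensor for round numbers.

width, height = 4000, 3000
total = width * height                     # 12,000,000 photosites

# Bayer pattern: a repeating 2x2 block of G R / B G
bayer = {"green": total // 2, "red": total // 4, "blue": total // 4}

# JPEG 4:2:0: full-resolution luma, chroma halved in both dimensions
jpeg = {"luma": total, "chroma_per_channel": (width // 2) * (height // 2)}

print(bayer)
print(jpeg)
```

So each chroma channel in the JPEG carries 3 million samples, exactly matching the red and blue photosite counts of the Bayer sensor; only the green/luma side differs, which is the "not quite identical" part.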

about 5 years ago

Open Source Camera For Computational Photography

Jerry Coffin Re:The lens mount (167 comments)

It appears that they are using a Canon EF or EF-S mount, but Canon is missing in the list of sponsors (Nokia, Adobe Systems, Kodak, and Hewlett-Packard). So they either reverse-engineered the communication protocol between the lens and the camera, or they just skip that part altogether.

While you're probably right, the conclusion doesn't necessarily follow from the stated facts. Specifically, for a short time, Kodak built a dSLR with a Canon EF mount, and IIRC, they licensed the technology from Canon rather than using reverse engineering to get it (FWIW, they also built one with Nikon F-mount). Although I rather doubt that Kodak is authorized to distribute that data (and the license has probably expired by now, so they probably can't even use it themselves) it's possible that they could have served as the "conduit" for that data.

about 5 years ago

Open Source Camera For Computational Photography

Jerry Coffin Re:Listen up camera manufacturers (167 comments)

My God, I'm pretty much as geeky as they come, but why, WHY do you need WiFi on a camera?

I'm not sure why most people care, but some studio photographers rarely use memory cards -- they shoot in "tethered" mode, where each picture is downloaded directly to a computer as it's shot. Being able to do that without a cord is pretty darned handy. OTOH, this only makes sense in a few, rather specific, situations.

For most others, I guess a temporary version of the same could make some sense; instead of connecting a USB cable or pulling out the memory card and putting it into the computer, just set the camera near the computer to download your pictures.

One thing I'd like to see (not really restricted to wi-fi though) would be a way to actually do most camera setup on your computer. Many cameras' built-in menus are fairly difficult to navigate, and have little or no on-line help available. Even on a netbook, the screen and available drive space are much less limited...

about 5 years ago

Open Source Camera For Computational Photography

Jerry Coffin Re:Listen up camera manufacturers (167 comments)

The thing has the same size sensor as a DSLR, and seems to have a better-tuned JPEG noise reduction algorithm than the "full-size" Olympus bodies; apparently they have a new magic algorithm, and the E-P1 is the first camera to get it.

It's a micro four-thirds camera. The sensor is the same size as in a four-thirds SLR (e.g. an Olympus), but most dSLRs use a larger APS-C sized sensor. There are also a few dSLRs with full-frame (i.e. 24x36mm) sensors.

The low-light performance of the E-P1 seems to be quite good for its sensor size -- but that's definitely a serious qualification. To put things into perspective (no pun intended), consider the DxOMark tests. They don't have test results for the E-P1 yet, but they do for the Panasonic DMC-G1 (another recent micro four-thirds body). For low-light ISO performance, the DMC-G1 gets a rating of 463. For comparison, the Sony Alpha 900 (which has been panned for its relatively poor low-light performance) gets a rating of 1431. The best low-light performance currently available (by their tests, and probably in real use as well) is the Nikon D3, with a rating of 2527.
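Since DxOMark's low-light ISO score scales roughly linearly with usable sensitivity, the ratios between those scores convert into photographic stops with a base-2 log. This is just arithmetic on the numbers quoted above, not additional test data:

```python
# Converting DxOMark low-light ISO scores into "stops" relative to the
# DMC-G1: each stop is a doubling of usable sensitivity, so the
# conversion is log2 of the score ratio.
import math

scores = {
    "Panasonic DMC-G1": 463,
    "Sony Alpha 900": 1431,
    "Nikon D3": 2527,
}

baseline = scores["Panasonic DMC-G1"]
for body, score in scores.items():
    stops = math.log2(score / baseline)
    print(f"{body}: {stops:+.1f} stops vs DMC-G1")
```

On that scale the Alpha 900 sits about 1.6 stops ahead of the DMC-G1, and the D3 about 2.4 stops ahead -- a more intuitive way to read the gap than the raw ratings.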

Note that DxO is primarily a vendor of raw-conversion software, so they start strictly from raw data, not a JPEG. While it seems likely that the JPEG conversion built into the E-P1 will do quite well compared to most other in-camera conversions, I'd consider it quite unlikely that it would beat the best separate raw-conversion software. I'd add that I don't think DxO's is the best, but it's still quite a bit better than any in-camera JPEG software I've seen or heard of. In short, while DxOMark won't reflect the quality of the in-camera JPEG conversion, it does seem to give a fairly reasonable idea of the camera's image quality as long as you don't try to read too much into small differences.

The bottom line is that even if the E-P1 performs almost unbelievably well for its class, that's only likely to put it about even with relatively old APS-C dSLRs with relatively poor low-light performance (e.g. the Sony Alpha 700). Of course, in fairness, that's not necessarily a terrible thing -- few people really have much use for the low-light performance of a current APS-C dSLR, so coming reasonably close, with the size and weight of a micro four-thirds camera, is pretty impressive. Nonetheless, there's not really much chance of any micro four-thirds camera competing directly with a high-end dSLR any time soon.

about 5 years ago

New York MTA Asserts Copyright Over Schedule

Jerry Coffin Re:And here I was, thinking that... (395 comments)

[...] the people of New York [ ... ] could maybe even revoke the MTA's right to use it.

That wouldn't make any difference -- the MTA already seems to ignore it completely!

more than 5 years ago

Bjarne Stroustrup On Concepts, C++0x

Jerry Coffin Re:BIG need to dramatize (346 comments)

Concepts in C++ should have had the same effect for Generic Programming in C++ that C++ had for Object Oriented Programming in C. They should have democratized generic programming and brought forth a renaissance in C++ library design. Instead, petty politics killed the most exciting change to C-like languages in years.

I think you're overstating things a bit. It's certainly true that Concepts would have been an improvement, and would have made life a whole lot easier for a whole lot of people -- but let's face it, "renaissance" implies that we're currently in a dark age, and that's overstating things quite a bit. Point one, there's pretty substantial work being done right now. Point two, concepts are not going to allow an average programmer to just decide that they're going to recreate something like Spirit or Proto or Xpressive this afternoon, or anything like that.

I also think "petty politics" is going overboard, at best. Howard Hinnant seems to have been the one who started the discussion at the last meeting that ultimately led to the removal of concepts from the draft standard -- and I find an accusation of his being involved in petty politics completely unbelievable. Quite the contrary, the (admittedly few) times I've talked with him, he struck me as a very level-headed person who was careful to evaluate ideas on their merits. Likewise, Beman Dawes, Bjarne himself, and (since you brought him up) Dave Abrahams don't seem to me like "petty politics" kinds of people either.

I'd also note that when it came to a vote, the overwhelming majority of committee members voted to remove them -- including (for one example) Douglas Gregor, the primary author of ConceptGCC, and one of the coauthors of both N1758 and N1849 (the two versions of the Indiana Proposal).

I think the vast majority of the committee simply believed that including concepts would delay the standard by a minimum of a couple years, and probably longer than that. A fair number of features originally intended to be included in C++0x have been removed for exactly the same reason, though quite a few others have localized enough effects that they're currently scheduled for a TR (Technical Report) rather than waiting for the next major update to the standard itself. Unfortunately, since they make fundamental changes to the type system, I don't think concepts can be added in the same way.

*(Dave - I mean that in the nicest sense... you've done a great job with Boost (oh, we need to jam again, too)).

I object your honor. Everybody knows a proper blues guitarist is short and fat, with black hair and a scraggly beard -- and this "Dave" is about as different from that as humanly possible! :-)

more than 5 years ago

Bjarne Stroustrup On Concepts, C++0x

Jerry Coffin Re:C++0x? (346 comments)

Now that the finalization has slipped past 2009, it's C++1x.

No, it is not C++1x. There's already a project called C++1x, which is slated to be the revision of C++ after C++0x. There's also a plan for a Technical Report (TR) intended to be published a couple years (or so) after the C++0x revision, but before C++1x.

I'd note that it's actually fairly common for final approval of a language to happen some time after the date by which it's usually known.

  • Fortran 2008: Not finished yet, most recent draft dated 23 June 2009
  • Algol 60: Approved as an ISO standard in 1984
  • Simula 67: Approved as a Swedish national standard in 1987 (never approved as an ISO standard)

more than 5 years ago

Submissions


Jerry Coffin writes  |  more than 8 years ago

Jerry Coffin (824726) writes "The US Patent and Trademark Office recently released a new Draft Strategy document: "designed to foster American innovation and competitiveness at home and around the globe. This is a draft for which public comment — including suggestions, questions, and other input — is being solicited." Those who won't RTFA might at least try the official one page summary.

This may be your chance to exert influence over how the Patent Office carries out its job (within the confines of the law, of course). Believe it or not, they even seem to recognize the need for real change, such as: "...abolishing the one-size fits all examination system...""

Journals

Jerry Coffin has no journal entries.
