
Casting Doubt On the Hawkeye Ball-Calling System

timothy posted more than 5 years ago | from the nudging-the-odds-means-smarter-bets dept.


Human judgment by referees is increasingly being supplemented (and sometimes overridden) by computerized observation systems. nuke-alwin writes "It is obvious that any model is only as accurate as the data in it, and technologies such as Hawkeye can never remove all doubt about the position of a ball. Wimbledon appears to accept the Hawkeye prediction as absolute, but researchers at Cardiff University will soon publish a paper disputing the accuracy of the system."

220 comments

first! (-1, Troll)

Anonymous Coward | more than 5 years ago | (#23987293)

tennis sucks.

Why not use... (4, Insightful)

Kagura (843695) | more than 5 years ago | (#23987315)

Why not use a radio transmitter in the tennis ball (or soccer ball or whatever) to record its exact position? I am certain this has been discussed and I wouldn't be surprised if it's already in use. The article's "Hawkeye" just works by optical analysis.

Why not, it works for shopping carts (2, Funny)

davidwr (791652) | more than 5 years ago | (#23987345)

If you leave the store parking lot, one of the wheels locks.

Re:Why not, it works for shopping carts (4, Funny)

Anonymous Coward | more than 5 years ago | (#23987439)

So if one of the players tries to steal a tennis ball, they won't get very far?

Re:Why not, it works for shopping carts (0, Funny)

Anonymous Coward | more than 5 years ago | (#23987521)

So if one of the players tries to steal a tennis ball, they won't get very far?

Don't worry, tennis is probably the only sport not full of blacks. No one will be stealing anything for a while.

Re:Why not, it works for shopping carts (0)

Anonymous Coward | more than 5 years ago | (#23987733)

You are just trying to find any excuse to strip search the Williams sisters.

Re:Why not, it works for shopping carts (0, Offtopic)

Starburnt (860851) | more than 5 years ago | (#23988557)

Whereas I'd look for any excuse not to strip search the Williams sisters, lest I have to gouge my own eyes out.

Re:Why not, it works for shopping carts (2, Funny)

sconeu (64226) | more than 5 years ago | (#23987655)

Unfortunately, for my local supermarket, "leaving the store parking lot" is defined as entering the store.

Re:Why not, it works for shopping carts (0)

Anonymous Coward | more than 5 years ago | (#23987671)

Except that all the stores I've ever seen use it stopped using it after about 6 months. I'm guessing that it has something to do with the lockups that happen when you are nowhere near the "yellow line". Great in theory, terrible in practice.

Re:Why not use... (5, Informative)

Bun (34387) | more than 5 years ago | (#23987461)

Why not use a radio transmitter in the tennis ball (or soccer ball or whatever) to record its exact position? I am certain this has been discussed and I wouldn't be surprised if it's already in use. The article's "Hawkeye" just works by optical analysis.

It's been tried in soccer. The latest attempts were prior to the last couple of World Cups IIRC, but the systems were plagued with problems, not the least of which was the survival of the transmitter.

http://www.gizmag.com/go/2790/ [gizmag.com]

Re:Why not use... (5, Interesting)

InadequateCamel (515839) | more than 5 years ago | (#23987535)

Further to that, if the transmitter can't survive in a soccer ball (where a well-struck shot moves at around 120-130 kph), then there's no way it will handle travelling at over 200 kph after a serve, followed by an (at least) 100 kph forehand return (a net change of over 300 kph in a fraction of a second!).

Also, a radio transmitter cannot account for the distortion of a ball upon impact, which will depend on velocity, angle of rotation, angle of impact, surface being played on, etc etc etc...

Re:Why not use... (4, Interesting)

icegreentea (974342) | more than 5 years ago | (#23987509)

Assuming you could build a radio transmitter tough enough to handle it...

With tennis balls, I imagine there would be problems with balance and the response of the ball. Especially with such a small ball, mounting a rugidtized radio transmitter (a ball probably has to go through 20 gs or something) would probably mess with the balance and how the ball deforms. Not to mention, unless you can mount the system directly in the center of the ball, you still have a margin of error the diameter of the ball. I imagine that would be a fairly significant amount of error in tennis (perhaps on the same level as this Hawkeye system?) when calling the lines.

Re:Why not use... (1)

Amouth (879122) | more than 5 years ago | (#23987567)

Wouldn't this be a good use for RFID tags? They should be tough enough to handle it.

Re:Why not use... (5, Interesting)

jfim (1167051) | more than 5 years ago | (#23987661)

Triangulation of radio signals is not accurate enough to give sub-centimeter accuracy and the added mass to the tennis ball would probably cause the players to have some objection to adding a radio transmitter into the ball.

The claim that the Hawkeye system gives an average of about four millimetres of error seems somewhat reasonable, given that we're getting accuracy better than two centimetres when detecting objects with a single camera with optics as large as the last segment of a typical pinky. (FWIW, here's a short demo [youtube.com] of what we're working on for our autonomous underwater vehicle [etsmtl.ca] )

However, TFA's suggestion to display the error range for a particular shot and leave the final decision to a human is quite reasonable, and is how it should be. Blindly trusting technology, or discarding it altogether, is unreasonable.

Re:Why not use... (-1, Offtopic)

jsprat (442568) | more than 5 years ago | (#23987849)

mounting a rugidtized radio transmitter (a ball probably has to go through 20gs or something)

rugidtized???

Read The Friendly Article (3, Informative)

MaliciousSmurf (960366) | more than 5 years ago | (#23987785)

Because that's not the issue. You'll always have uncertainty in systems. The study argues that the public perceives these systems as infallible, and therefore believes that technology can provide a final, absolute arbitration. The study is commenting on this tendency in lay people (i.e., people without specialized knowledge of the system), and warns against the corollaries that stem from such assumptions. Also, the title is bad: they are merely looking at the issue through the lens of Hawk-Eye; they're not looking at Hawk-Eye specifically. You may note that there is no analysis of the Hawk-Eye system beyond a basic discussion of its function.

Re:Why not use... LASERS! (1, Funny)

Anonymous Coward | more than 5 years ago | (#23987793)

Just put high-powered lasers firing down the lines. If the ball is melted slag, it was out.

Re:Why not use... LASERS! (3, Funny)

janrinok (846318) | more than 5 years ago | (#23988091)

They have already experimented with this idea, but had problems keeping the sharks under control.

Re:Why not use... (5, Interesting)

Sethumme (1313479) | more than 5 years ago | (#23987825)

I still don't understand why there isn't more research on developing a surface for the out-of-bounds area that temporarily registers the exact impression of any impact on it.
I envision something that looks like a big LCD touch screen (but more durable). Every time something made contact with the active surface, a record of the ball's "footprint" could be recorded (and even temporarily displayed wherever it touched the surface). That would allow for highly precise measurement of the ball's landing position, and it wouldn't need to incorporate any new materials into the ball itself. The active surface would only need to be in the out of bounds area, and even then, it would only need to be half a foot wide in order to cover the important zone where the ball's landing position is questionable.

Re:Why not use... (0)

Anonymous Coward | more than 5 years ago | (#23987915)

That's original, although the sensors would also need to be on the inside of the court, as a ball is "in" whenever it touches the line, no matter how little.

There's also the problem of natural surfaces such as grass and clay. But on hard courts I think this might become a viable alternative to Hawkeye someday, if it's durable enough.

Re:Why not use... (1)

Anpheus (908711) | more than 5 years ago | (#23988007)

We could call such a surface "Wet Paint" and use it whenever we need to determine if contact was made between the "Wet Paint" and another object.

Re:Why not use... (3, Funny)

Don_dumb (927108) | more than 5 years ago | (#23988545)

At Wimbledon you could have chalk for the lines and if you were unsure if the ball hit the line one of the competitors could point out that

"the chalk flew up"

Re:Why not use... (3, Insightful)

Drathos (1092) | more than 5 years ago | (#23987857)

Fox tried to do that with hockey back in the 90s in order to make the puck easier to see on TV (personally, I've never had a problem seeing the puck). The Glow Puck was horrible. When there was a jam up in the corner, it would literally be bouncing all over the screen. It also changed the way the puck performed on the ice. Because of the electronics and battery inside, they couldn't freeze the puck like they normally do, causing it to bounce a lot more and not slide on the ice as easily.

In a hollow sphere like a tennis ball, how would you keep the dynamics of the ball the same as they are when you add a transmitter to it? If you adhere it to the side, the ball will be off balance. If you create some internal structure/support to keep it centered, you change the deformation during a bounce/hit.

Re:Why not use... (0)

Anonymous Coward | more than 5 years ago | (#23987863)

Maybe a radioisotope? Track it by its emissions?

It doesn't have to be perfect (2, Interesting)

davidwr (791652) | more than 5 years ago | (#23987321)

The decision of which system to use: human, computer, human with computer check, computer with human check, committee vote, or what-not should be based on which has the lowest uncorrected error rate within limited time constraints.

This assumes there is another method, such as post-analysis of videotape, that can find almost all uncorrected errors or at least give some good indication of the uncorrected error rate.

Re:It doesn't have to be perfect (4, Funny)

Fred_A (10934) | more than 5 years ago | (#23988591)

This assumes there is another method, such as post-analysis of videotape, that can find almost all uncorrected errors or at least give some good indication of the uncorrected error rate.

Another method would be to use Radar instead of Hawkeye. Probably faster and more efficient as well.
(obscure reference).

The only solution (5, Funny)

Anonymous Coward | more than 5 years ago | (#23987325)

An ultra-accurate GPS-like system that tracks the position of balls in nanosecond detail. They can call it Your Object Universal Remote Movement Observance Mechanism, or YOUR MOM for short.

Re:The only solution (0)

Anonymous Coward | more than 5 years ago | (#23988279)

The bad thing about YOUR MOM is that it's so big that it needs its own tennis court.

first (-1)

Anonymous Coward | more than 5 years ago | (#23987327)

Frost piss in yo' mouth you tennis bastads! 8 to love, and by love I mean luv.. *swallow*

If you are gonna troll, do it right (1)

davidwr (791652) | more than 5 years ago | (#23987363)

This is a tennis topic.

The score should be 15-love not 8-love you insolent clod!

RE: STFU (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#23988021)

My dick in only 8" long, you insensitive clod!

Other applications? (1, Interesting)

Merls the Sneaky (1031058) | more than 5 years ago | (#23987347)

Can this be applied to something useful? You know, besides whether or not someone was out in a game of tennis?

Re:Other applications? (1)

Joebert (946227) | more than 5 years ago | (#23987425)

Yes. We will be able to determine once and for all whether or not they just grabbed my ass.

Re:Other applications? (2, Funny)

Anonymous Coward | more than 5 years ago | (#23987519)

We will be able to determine once and for all whether or not they just grabbed my ass.

You're a guy reading slashdot by yourself on a saturday night. It doesn't take any special technology to know the answer to that question.

Re:Other applications? (1)

NamShubCMX (595740) | more than 5 years ago | (#23987853)

Yea technology for sports is ridiculous. Better find new ways of killing other human beings more efficiently.

Re:Other applications? (5, Funny)

InfoHighwayRoadkill (454730) | more than 5 years ago | (#23988357)

Yes, some people also want to use Hawkeye for some decisions in cricket, the sport that first used it. However, the margin of error is far greater in cricket (approximately ±2 inches), as the cameras have to be a lot further away due to the size of the pitch.

Also, Hawkeye finds it hard to pick up swinging, seaming and spinning balls: basically anything that deviates from its theoretical trajectory, either in the air or off the playing surface. Both of these are vital in the LBW decisions where the TV companies, and doubtless the Hawkeye people, would want to see it used.

Obviously cricket is a far more useful game than tennis so does this answer your question?

Re:Other applications? (3, Insightful)

stranger_to_himself (1132241) | more than 5 years ago | (#23988553)

Yes, some people also want to use Hawkeye for some decisions in cricket, the sport that first used it. However, the margin of error is far greater in cricket (approximately ±2 inches), as the cameras have to be a lot further away due to the size of the pitch.

The other key difference in cricket is that Hawkeye is used to predict where the ball would have gone had it not hit a pad, whereas in tennis it only needs to say where the ball actually was.

Re:Other applications? (2, Informative)

Don_dumb (927108) | more than 5 years ago | (#23988585)

The reason it isn't officially used in cricket is that it would be used to predict the path of the ball had someone's legs not interrupted it, whereas in tennis it is simply used to record where the ball actually went.
Obviously, just tracking a ball is a more definite science than predicting something that didn't happen (but could have), especially as anyone who knows cricket will tell you that the path of a cricket ball is 'mysterious'.
I once heard a cricket commentator interview the inventor of Hawk-Eye (a Mr Hawkins) and ask him how accurate the system was. He said something along the lines of "in testing it has been incredibly accurate", which I found quite weak, as I was expecting tolerances of so many mm of deviation per second.

In cricket it is only used as a commentary tool, generally proving that the umpires get it 'right' most of the time anyway.

Re:Other applications? (1)

stranger_to_himself (1132241) | more than 5 years ago | (#23988525)

Can this be applied to something useful. You know besides whether or not someone was out in a game of tennis?

Most of the technology that underlies Hawkeye was originally designed as a missile tracking system. Although personally I think sports are far more worthwhile than warfare.

major league base ball umpires union does not like (1)

Joe The Dragon (967727) | more than 5 years ago | (#23987375)

The Major League Baseball umpires' union does not like systems like this. Systems like this are not 100% accurate either, and there are calls that are hard to make which could never be handled 100% by a bot.

http://query.nytimes.com/gst/fullpage.html?res=9406E6DE1F39F933A15754C0A9649C8B63 [nytimes.com]

http://query.nytimes.com/gst/fullpage.html?res=9D00E1D61130F933A1575AC0A9649C8B63 [nytimes.com]

http://findarticles.com/p/articles/mi_m1208/is_24_227/ai_103378465 [findarticles.com]

http://en.wikipedia.org/wiki/QuesTec [wikipedia.org]

Re:major league base ball umpires union does not l (1)

Raenex (947668) | more than 5 years ago | (#23987805)

The New York Times references are several years old. The Wikipedia article you mention says the controversy has died down and the system has brought the intended benefits.

I wish they would just use an automated system in all the parks instead of relying on the umps. I also wish they would use a standard strike zone, instead of one that changes based on the batter. It'll never happen, though.

Re:major league base ball umpires union does not l (1)

KGIII (973947) | more than 5 years ago | (#23987831)

Baseball would be FAR more difficult, if not impossible, with a generic system, or at least that is my opinion. In tennis, football, etc. you have an exacting standard. Baseball fields are all different: each one has different dimensions, and even the actual distance of the basepath may vary, though it isn't supposed to. Each player's height and stance is different, meaning there would be a great deal more difficulty determining the strike zone (in real time at least). I think the best solution I have seen so far has been American football's system of review after a coach's challenge, or reviews called by the head officials in the final minutes of the game. Please keep in mind that my knowledge of tennis is limited to just a couple of years back in high school, but my baseball experience is much greater, so I may have a heavily biased view of the complexities involved with the variances of baseball.

Re:major league base ball umpires union does not l (1)

crossmr (957846) | more than 5 years ago | (#23988131)

Review? Are you kidding?
Do you know how many "close" pitches there are in an average baseball game? I thought they were working on lowering the average time of a game; this would shoot it way back up.

Re:major league base ball umpires union does not l (1)

KGIII (973947) | more than 5 years ago | (#23988201)

You win. I hadn't thought of that. Though, again, I didn't mention that the review system for football should be used for baseball so I have a way to deny all accusations. *grins*

(Really, I hadn't thought of that. Wow. That'd ruin it. I figured that system had worked well for that game.) I played in college and actually wanted to consider an attempt at a career before admitting to myself how unrealistic it was, so I have a love affair with baseball, and I'm at a loss as to how we can automate the refereeing. I see there being so many variables that it'd be impossible to automate without altering the game in and of itself. Gots any ideas? However, no no no... the replay system used in football would suck for baseball, though it works well for that game, I think. If anything, I think there should be a "pitcher nut scratching" time limit as well as a single step into the batter's box. (Yeah, some people hate me for that one. You step into the batter's box and step out, it should be a strike, like a balk. With the new guy pitching with both hands we may need a few minutes resolving it, but I'd say offense goes first so the hitter picks first, but that's WAY too much digression here.)

Re:major league base ball umpires union does not l (1)

crossmr (957846) | more than 5 years ago | (#23988519)

The only way to do it would be to develop the system so that it is as accurate as possible: sensors in the uniforms to measure the strike zone, sensors in the bat to determine if they checked the swing, etc.

If you can't get it to the same accuracy or better than an umpire, forget it.
After doing that, you'd have to disallow reviews. That would speed up the game. The umpire NEVER changes his call when anyone argues with him, so people could storm out and have just as futile a conversation with the Umpirebot 2000.

Re:major league base ball umpires union does not l (4, Insightful)

DriedClexler (814907) | more than 5 years ago | (#23987935)

I'm confused. Why would umpires oppose a technology that can automate the refereeing of a game? It just doesn't make any sense.

Re:major league base ball umpires union does not l (0)

Anonymous Coward | more than 5 years ago | (#23988231)

I dunno, I guess some people just like to keep their jobs.

Anonymous Coward (3, Insightful)

Anonymous Coward | more than 5 years ago | (#23987393)

They're reproducing stuff that's already known. Yes, Hawkeye can be inaccurate. However, it's MORE accurate than linesmen and certainly the chair umpire. That's why it's used as the definitive word.

I'd certainly prefer it to be used otherwise. The best way would be to give the chair umpire the information from HawkEye and then let him decide whether to use it at any given time, properly educated about the types of errors the machine can make. But that wouldn't be as flashy, would it? So the advertisers wouldn't go for it.

Re:Anonymous Coward (1, Interesting)

martin-boundary (547041) | more than 5 years ago | (#23987453)

A system such as Hawkeye CANNOT BE MORE ACCURATE than humans. From the link in the article, the Hawkeye system uses 5 cameras to compute the 3D position of the ball. That's an overdetermined system of equations, which cannot have a unique solution due to observation errors in the camera views.

So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares, and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans, just different. FYI, there are plenty of other arbitrary rules that work, eg least absolute errors, maximum entropy, etc.
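
The least-squares-vs-least-absolute-deviations point above can be illustrated with a toy one-dimensional version (the numbers below are invented, not Hawkeye data): estimating a single position from five noisy "camera" readings, the two criteria reduce to the mean and the median respectively, and they genuinely disagree.

```python
# Five noisy observations of the same scalar position, combined under
# two different criteria. Hypothetical numbers, not real Hawkeye data.
observations = [100.2, 99.8, 100.1, 100.4, 103.0]  # mm; last camera is off

# Least squares: minimise the sum of squared residuals -> the mean
ls_estimate = sum(observations) / len(observations)

# Least absolute deviations: minimise the sum of |residuals| -> the median
srt = sorted(observations)
lad_estimate = srt[len(srt) // 2]

print(round(ls_estimate, 1))  # 100.7 -- pulled toward the outlier
print(lad_estimate)           # 100.2 -- robust to it
```

Whether that difference makes either criterion "no better than a human" is, of course, exactly what the replies below dispute.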

Re:Anonymous Coward (4, Insightful)

the_other_chewey (1119125) | more than 5 years ago | (#23987557)

The accuracy has absolutely nothing to do with the overdetermination of the system. If it did, it would be simple to reduce the number of cameras to three, and boom: perfect position. That's obviously not how it is.

And of course the number of cameras does increase the precision of the computed position. The principle is exactly the same as for GPS, where more satellites are better as well.

Using a certain fitting method (least squares, least absolutes, etc.) has nothing whatsoever to do with something like "complementing the equations"; it's just necessary because no measurement is perfect. You are arguing that multiple measurements do not increase the accuracy of a computed average because there are multiple averaging algorithms to choose from.

Bullshit.

Re:Anonymous Coward (1)

drew30319 (828970) | more than 5 years ago | (#23987637)

And of course the number of cameras does increase the precision of the computed position - the principle is exactly the same as for GPS, where more satellites are better as well.

Since we're only dealing with three dimensions, why would any number of satellites > 3 be more precise for GPS?

Re:Anonymous Coward (3, Informative)

DrJimbo (594231) | more than 5 years ago | (#23987699)

Since we're only dealing with three dimensions, why would any number of satellites > 3 be more precise for GPS?

If the errors are random and follow a normal distribution (two big ifs, I admit) then even in one dimension, the error is reduced by a factor of 1/sqrt(N) where N is the number of measurements.

The same general idea applies to higher dimensions. If you can avoid systematic errors then the more measurements you take, the more accurate your final result will be. If you are interested in the gory details of the higher dimensional case, you should take a look at singular value decomposition [wikipedia.org] .
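
The 1/sqrt(N) claim is easy to check numerically. A quick simulation sketch (the 4 mm noise level is an invented figure, loosely echoing the article's error estimate):

```python
import random
import statistics

random.seed(0)
TRUE_POS = 0.0
SIGMA = 4.0  # mm of per-measurement noise (an invented figure)

def estimate_error(n, trials=2000):
    """Spread of the mean of n noisy measurements, across many trials."""
    means = [statistics.fmean(random.gauss(TRUE_POS, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

for n in (1, 4, 16):
    print(n, round(estimate_error(n), 2))
# Each 4x increase in N roughly halves the error: ~4.0 -> ~2.0 -> ~1.0
```

This assumes exactly what the parent hedges on: independent errors with no systematic bias.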

Re:Anonymous Coward (4, Informative)

the_other_chewey (1119125) | more than 5 years ago | (#23987709)

Since we're only dealing with three dimensions, why would any number of satellites > 3 be more precise for GPS?

Because we are dealing with reality as well, where no measurement is perfect. Geometrically, three sats are indeed enough, but in reality: more measurements -> smaller error bars -> better position. The alternative to more sats would be not to move and to take more measurements over time. But that would render GPS useless for most applications ;-)

Additional trouble with the "stay and wait" method: those nasty satellites move over time, introducing different errors that cannot be eliminated as easily by simple averaging.

That's also why ultra-precise GPS surveying records the satellite data and waits the week it takes for the actual orbital data (as measured, not just as predicted) to become available before computing the position, thereby eliminating (well, at least reducing) another source of error.

In statistics, the only thing beating multiple measurements is even more measurements.

Re:Anonymous Coward (2, Interesting)

pyrrhonist (701154) | more than 5 years ago | (#23988163)

Short answer: GPS units just estimate your position rather than calculating it exactly. More satellites make for a better estimate.


Long answer: The ranges calculated in GPS are estimates, because the clocks in the receiver aren't very precise. A small offset in the timing can cause a large error in the calculated distance (if the clock is off by 1/1000th of a second, you're actually 200 miles away from where you think you are). This is why GPS usually uses 4 satellites. If the receivers all had atomic clocks on them, every set of measurements from any three satellites would end up at the same exact point, because the clocks are so precise. The quartz clocks in GPS receivers drift out of sync with the clocks on the satellites, and this drift is enough to cause pretty large inaccuracies. In other words, if you measure the ranges from three satellites, and then subsequently measure the range from a fourth, the fourth satellite's measurement will not align with the other three. When this happens, the GPS unit makes adjustments to the 4 measurements until they all align in a single point. This effectively eliminates the clock issue. To get an even more accurate measurement, the GPS receiver will try to acquire as many satellites as possible and take measurements in groups of 4. This helps eliminate other errors caused by interference, atmospheric anomalies, highly reflective goats, etc.
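
The parent's "200 miles" figure is easy to sanity-check: a pseudorange error is just the clock offset multiplied by the speed of light (the 1/1000 s offset is the figure from the comment above).

```python
C = 299_792_458         # speed of light, m/s
clock_offset_s = 1e-3   # the 1/1000th of a second from the comment above

range_error_m = C * clock_offset_s           # ~300 km of pseudorange error
range_error_miles = range_error_m / 1609.344
print(round(range_error_miles))  # ~186 miles -- same ballpark as "200 miles"
```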

Re:Anonymous Coward (1, Informative)

martin-boundary (547041) | more than 5 years ago | (#23987681)

The accuracy has absolutely nothing to do with the overdetermination of the system. If it had, it would be simple to reduce the number of cameras to three, and boom - perfect position. That's obviously not how it is.

Sorry, but you don't know what you're talking about.

Any system of equations with more equations than unknowns is called overdetermined. If you have 5 cameras and 3 coordinates, that leads to an overdetermined system.

The accuracy of the cameras matters, because if the reported measurements were completely accurate, then some of the equations in the system would be linearly dependent on others, and as long as the cameras are intelligently placed, there would be precisely one solution.

Observation errors in the camera measurements however produce an inconsistent set of equations, hence the usual problem of overdetermination.

And of course the number of cameras does increase the precision of the computed position - the principle is exactly the same as for GPS, where more satellites are better as well.

No it doesn't. When you combine observations, there is ALWAYS the question of which criterion you use to combine them. That's ARBITRARY. GPS is no exception: some ARBITRARY method of combining observations is used.

By far the most popular criterion is least squares, which is simple but not as robust to perturbation as least absolute deviation for example.

Using a certain fitting method (least squares, least absolutes etc.) has nothing whatsoever to do with something like "complementing the equations", that's just necessary because no measurement is perfect -

I don't know what you mean by complementing the equations. A fitting method is used when there are too many equations, and you'd like to essentially ignore the redundancy embodied in them, by introducing another criterion. Is that what you mean? The criterion is arbitrary, so there is no universal way of complementing equations. People pick the criterion they like, usually the one that involves least effort on their part.

You are arguing that multiple measurements do not increase the accuracy of a computed average because there are multiple averaging algorithms to choose from.

Precisely. These multiple averaging methods give different answers. Which one is right? There isn't one. In particular, none of them is more accurate than a human. Just different.

addendum (1)

martin-boundary (547041) | more than 5 years ago | (#23987763)

I just reread the bit about complementing equations, and I'm actually the one who used that word first, so you're off the hook on that point - I shouldn't be carrying on three unrelated conversations at the same time ;)

Re:Anonymous Coward (1)

the_other_chewey (1119125) | more than 5 years ago | (#23987799)

Using a certain fitting method (least squares, least absolutes etc.) has nothing whatsoever to do with something like "complementing the equations"

I don't know what you mean by complementing the equations.

Neither do I, that's an expression you introduced, and the reason why I put it in quotation marks.

You are arguing that multiple measurements do not increase the accuracy of a computed average because there are multiple averaging algorithms to choose from.

Precisely. These multiple averaging methods give different answers. Which one is right? There isn't one. In particular, none of them is more accurate than a human. Just different.

Fascinating. I regularly make measurements at the micrometer scale using a microscope, and easily increase the precision of my results by repeating them.
So I should just trust my gut feeling, statistics be damned? Thanks, that'll really speed up my work.

I'm really not convinced that I am the one with no clue...

Re:Anonymous Coward (2, Interesting)

martin-boundary (547041) | more than 5 years ago | (#23988301)

Fascinating. I regularly make measurements in the micrometer scale using a microscope, and easily increase the precision of my results by repeating them.

Look, statistics is much more complicated than averaging. Do you know where the averaging rule that you use comes from? It's the maximum likelihood estimator [wikipedia.org] : It's a function of the observations which is obtained under certain assumptions on the physical process (which in your case would typically be a Gaussian distribution of errors, all independent).

So I should just trust my gut feeling, statistics be damned? Thanks, that'll really speed up my work.

It makes no sense to claim that accuracy is improved by averaging without subscribing to those or similar assumptions. In fact, there are other rules; for example, you might add some extra dummy values to your measurements if you have a Bayesian prior assumption, etc. The point is that what works in your case need not work in other, superficially similar, problems, especially if the risk function is different.

When one solves an overdetermined system, one implicitly includes some assumptions [wikipedia.org] . The answers one gets are only as good as the assumptions one puts in, garbage in garbage out etc. There simply is no universal solution to an overdetermined system.

Re:Anonymous Coward (2, Interesting)

eh2o (471262) | more than 5 years ago | (#23987941)

Combining observations isn't arbitrary; it's based on prior knowledge of the underlying statistics and measurement methods. If the multiple measurements are identical with normally distributed error, for example, the mean can be used. If the measurement is subject to random catastrophic failure (e.g. bit flipping), then the median is a good choice. In the Bayesian method you form a composite probability distribution by combining conditional or joint probabilities. In fact, if you do it wrong, you can make the final answer *worse* than any of the original measurements (this is called catastrophic fusion). The method is NOT arbitrary--making that assumption will get you into big trouble fast.
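A quick illustration of that bit-flip scenario (invented numbers): one catastrophically failed reading wrecks the mean, while the median barely notices.

```python
# Toy illustration of catastrophic failure: four sane readings of a position
# near 10 mm, plus one reading ruined by a flipped high-order bit.
import statistics

good = [10.02, 9.98, 10.01, 10.00]
corrupted = good + [512.0]           # hypothetical bit-flipped value

print(statistics.mean(corrupted))    # dragged above 100 by the outlier
print(statistics.median(corrupted))  # still 10.01
```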

By the way a system like this has potentially many, many more observations than just five--since it also uses position and velocity estimates from previous frames to compute the most likely next position of the ball. With five high-speed cameras combining data into a Kalman filter you are looking at hundreds if not thousands of measurements of the ball trajectory, which will give you enough data to estimate subtle qualities like the spin of the ball and so on (by extension the number of variables is by no means limited to three, since one can estimate any number of higher order features--e.g., velocity, acceleration, angular velocity, wobble, etc).

It isn't hard to engineer machines that surpass the accuracy of a human in a variety of tasks, and the question of "which one is right" is not merely subjective but described by a body of math known as signal detection theory. This math, by the way, came out of the subfield of psychology dedicated to measuring the thresholds of discrimination by human judgement with respect to physical phenomena--psychophysics. The resolving power of a measurement system can be quantified by its discriminability index, and decision-making processes based on that information are described by positions along the corresponding ROC (receiver operating characteristic) curve.
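As an aside, the discriminability index d' is simple to compute from hit and false-alarm rates; a sketch with invented rates (using the probit from Python's standard library, not any real Hawkeye or human data):

```python
# d' (d-prime) from signal detection theory: the separation, in standard
# deviation units, between the signal and noise distributions implied by
# an observer's hit rate and false-alarm rate.
from statistics import NormalDist

z = NormalDist().inv_cdf   # probit (inverse standard-normal CDF)

def d_prime(hit_rate, false_alarm_rate):
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime(0.99, 0.01))  # a sharp observer: d' around 4.65
print(d_prime(0.80, 0.30))  # a coarser observer: d' around 1.37
```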

Re:Anonymous Coward (1)

martin-boundary (547041) | more than 5 years ago | (#23988409)

Combining observations isn't arbitrary; it's based on prior knowledge of the underlying statistics and measurement methods.

It's also based on the type of loss function being used, which is arbitrary, and many people would say that prior knowledge, being subjective, is a classic example of arbitrariness.

Take a simple example, a Kalman filter of a single constant value with a perturbation and a trivial observation model. Your filter has two parameters, for the variance of the perturbation, and the variance of the observation error. Depending on how you choose those values, your estimate will oscillate wildly or be very smooth. How is this not arbitrary?
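That two-parameter example can be run in a few lines (a scalar filter with made-up variances, purely to show the dependence on the chosen parameters): the steady-state gain, i.e. the weight each new observation receives, swings from nearly 1 to nearly 0 depending on the assumed variances.

```python
# Scalar Kalman filter for a constant state: x' = x + perturbation (var q),
# observation = x + noise (var r). Iterate the variance recursion and
# return the (near steady-state) Kalman gain.
def kalman_gain(q, r, steps=5000):
    p = 1.0                 # initial state variance (arbitrary start)
    for _ in range(steps):
        p = p + q           # predict: perturbation inflates uncertainty
        k = p / (p + r)     # gain: how much to trust the new observation
        p = (1 - k) * p     # update: observation shrinks uncertainty
    return k

print(kalman_gain(q=1.0,  r=0.01))  # gain near 1: estimate chases every reading
print(kalman_gain(q=1e-6, r=1.0))   # gain near 0: estimate stays very smooth
```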

It isn't hard to engineer machines that surpass the accuracy of a human in a variety of tasks, and the question of "which one is right" is not merely subjective but described by a body of math known as signal detection theory.

Only if you have a well defined problem. For example, your typical signal detection problem is formulated within a Gaussian/least squares/Hilbert space framework. Sure, the math is beautiful and widely applicable, but if I don't tell you that the criterion has to be MSE, and you go ahead and assume that anyway, talk of accuracy becomes meaningless. You might be solving the correct problem, or you might not.

Ultimately, the problem formulation within the rules is not sufficiently precise in tennis to be able to assert that 5 cameras using Kalman filtering to estimate the trajectory of the ball is actually solving the same problem that the referee is solving. Obviously, this only matters in corner cases, and that's exactly when precise specifications break down. How do you include the umpire in the physical model? If you don't, you're implicitly arguing for a change in the tennis rules away from human umpires, in favour of an idealized physical specification.

Re:Anonymous Coward (1)

davester666 (731373) | more than 5 years ago | (#23987675)

Hawkeye doesn't need to be 100% accurate, or dumb itself down to 3 cameras, to estimate the location of a 2.5-inch-diameter ball traveling over 100 miles per hour with greater accuracy than any particular human eye under ideal circumstances [say, no players, with a ball machine shooting balls over the net]. Then mix in the environment: the location and movement of the players and their racquets, dust, wind, crowd noise, and the sun. Hawkeye is better equipped to deal with and ignore all of these as well.

Just saying that Hawkeye uses [or may use] some particular method of estimating where the ball hit the ground does not prove that Hawkeye is more accurate, less accurate, or equally inaccurate. It's an apples-to-oranges comparison, as a person uses a rather different method of estimating the ball's flight than the Hawkeye system does.

The only way to establish whether the Hawkeye system is more or less accurate than the human eye is by systematic comparison of the Hawkeye system vs. humans, or some other method of determining where the ball landed (such as paint on the ball or a markable surface), to determine the range of error for both. I would guess that the ATP and the WTA both went for this kind of empirical comparison rather than just saying: well, Hawkeye is not perfect and it estimates where the ball is, and a human also estimates where the ball is, therefore Hawkeye can't possibly estimate better than a human.

Re:Anonymous Coward (2, Insightful)

Rakishi (759894) | more than 5 years ago | (#23987751)

A system such as Hawkeye CANNOT BE MORE ACCURATE than humans.

Of course it can be, humans are not 100% accurate and even human eyes aren't 100% accurate.

From the link in the article, the Hawkeye system uses 5 cameras to compute the 3D position of the ball. That's an overdetermined system of equations, which cannot have a unique solution due to observation errors in the camera views.

That it's overdetermined doesn't matter since in the end the error of those combined non-unique solutions is still less than that of a non-overdetermined system of the same cameras.

So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares, and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans, just different. FYI, there are plenty of other arbitrary rules that work, eg least absolute errors, maximum entropy, etc.

That it uses an arbitrary rule says NOTHING about it being capable of more accuracy than a human. Accuracy is easy to determine (via experimentation if you wish) and claiming that we somehow magically can't measure it is idiotic. For example, a checkers program plays the game differently than a human, but one can still claim the program is better than a human (since no human can beat the best checkers program, from what I remember). It may be possible that neither humans nor this system are better in every case, but that still doesn't mean one can't inherently be better (ie: if the cameras are accurate enough). In fact, even if one doesn't dominate the other, one can still use some measure to determine which is more accurate (on average, etc.).
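The "via experimentation" point can be sketched as a toy Monte Carlo (every number here is invented for illustration; the "noisy judge" model is an assumption, not a claim about real humans or Hawkeye):

```python
# Simulate landings near a line and score two noisy "judges" of differing
# precision by their in/out misclassification rate against the known truth.
import random

random.seed(0)

def error_rate(noise_mm, trials=100_000):
    wrong = 0
    for _ in range(trials):
        true_pos = random.uniform(-20.0, 20.0)         # mm past the line if > 0
        seen = true_pos + random.gauss(0.0, noise_mm)  # judge's noisy reading
        if (seen > 0.0) != (true_pos > 0.0):           # call differs from truth
            wrong += 1
    return wrong / trials

print(error_rate(noise_mm=10.0))  # coarser judge: wrong more often
print(error_rate(noise_mm=3.0))   # tighter judge: wrong less often
```

Given any agreed ground truth (paint, pressure sensors, whatever), this kind of comparison ranks the judges regardless of how each one arrives at its call.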

Re:Anonymous Coward (1)

martin-boundary (547041) | more than 5 years ago | (#23987893)

Of course it can be, humans are not 100% accurate and even human eyes aren't 100% accurate.

You're missing the point. Accuracy makes no sense unless you include the error criterion. Any estimation algorithm has an arbitrary error criterion, as do humans. Neither is more accurate than the other, they're just different estimation procedures.

That it's overdetermined doesn't matter since in the end the error of those combined non-unique solutions is still less than that of a non-overdetermined system of the same cameras.

No, talking about the size of the error makes no sense if you haven't specified a regularization criterion. Now choosing the criterion is essentially equivalent to choosing what the theoretical answer should be, so it's circular reasoning to claim that the resulting error would be smaller.

Accuracy is easy to determine (via experimentation if you wish) and claiming that we somehow magically can't measure it is idiotic.

No it's not. If it were, there would be no issue. The issue is that these systems converge to some estimate, but the estimate need not be meaningful.

Suppose you aim a rocket to the moon, and you are told that a new targeting algorithm is more accurate. So you switch, and the rocket ends up flying into the sun. What went wrong? You neglected to ask what the new algorithm was targeting...

For example a checkers program plays the game differently than a human but one can still claim the program is better than a human (since no human can beat the best checkers program from what I remember).

In a game of checkers, both players are targeting the same outcome. In a position estimation problem, when the optimization criterion is not specified, two competing methods may not target the same answer.

For example, do you want to minimize the Euclidean distance of the estimated position against the true position, or do you want to minimize the error in a single coordinate only, or maybe you want to minimize the roughness of the trajectory of the ball over some time interval, or .... All these methods give different answers, not because some are more accurate than others, but because they actually converge to different solutions.

Re:Anonymous Coward (2, Insightful)

Rakishi (759894) | more than 5 years ago | (#23988017)

You're missing the point. Accuracy makes no sense unless you include the error criterion. Any estimation algorithm has an arbitrary error criterion, as do humans. Neither is more accurate than the other, they're just different estimation procedures.

No, talking about the size of the error makes no sense if you haven't specified a regularization criterion. Now choosing the criterion is essentially equivalent to choosing what the theoretical answer should be, so it's circular reasoning to claim that the resulting error would be smaller.

That's a silly argument because it basically says "nothing is better than a human because a human is not optimized for the problem" or "we can never determine what is better because we need to first determine what better is, and there is more than one possibility." You can, I'm assuming, create a system that is in fact more accurate across all error criteria, but that's a separate point. There already is an error criterion in place, since human judges must somehow be chosen and evaluated. The computer system in fact uses a different error criterion as a stepping stone, because the true measure is not as easy to write into an algorithm.

No it's not. If it were, there would be no issue. The issue is that these systems converge to some estimate, but the estimate need not be meaningful.

This system is designed, in the end, to determine whether a ball is on one side of a white line or the other. THAT is the error criterion, and everything else is irrelevant, or just a step to that end goal, or an easier-to-measure version of that end goal. Interestingly enough, the comparison isn't versus humans but rather versus humans using the existing computer system.

For example, do you want to minimize the Euclidean distance of the estimated position against the true position, or do you want to minimize the error in a single coordinate only, or maybe you want to minimize the roughness of the trajectory of the ball over some time interval, or ....

This is irrelevant to using overdetermination, since the problem probably also exists with even 3 cameras. I'd guess that their placement or design parameters can easily lead to different error measures being optimized.

Re:Anonymous Coward (4, Insightful)

Moridineas (213502) | more than 5 years ago | (#23988043)

I'm willing to concede that you are talking theory at some level I don't fully grok. What I think you're completely missing in this discussion stems from your original statement that "a system such as Hawkeye CANNOT BE MORE ACCURATE than humans", which does not seem to be possibly true by any standard definition of these words that I am familiar with.

You can talk about "error criterions" and odd offtopic tangents about targeting algorithms etc, but the bottom line is, your original statement is completely wrong.

You say "So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares, and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans".

That's both wrong and illogical. Yes, Hawkeye is estimating a solution, and using an "arbitrary" (again, an utterly bizarre and incorrect word choice--the makers of Hawkeye have presumably done a great deal of testing to pick an algorithm, which is NOT arbitrary) method to estimate. However, if Hawkeye ESTIMATES the correct answer more often than a human judge, then Hawkeye is more accurate than a human judge. The methods it uses are really completely irrelevant to the final answer.

So, in short, it seems that your usages of "accurate," "error," "arbitrary," etc. in this discussion are different from everyone else's in the thread. Please let me know if I'm misinterpreting something, though!

Re:Anonymous Coward (2, Insightful)

TapeCutter (624760) | more than 5 years ago | (#23988339)

"system such as Hawkeye CANNOT BE MORE ACCURATE than humans.......You're missing the point."

As the other poster implied, your first assertion is what the "point" is. Speaking of points, your last paragraph doesn't seem to have one, it basically says different problems have different equations and answers.

I would also suggest that an empirically derived 4mm error is demonstrably more accurate than any human, and no amount of irrelevant math will change that. If what you and TFA are trying to say is, "it's foolish to believe technology is foolproof", then a primitive "duhhhhh" response is all I have.

Trivia: The aggrieved player must call for the Hawkeye decision if he disputes the umpire; each player is only allowed 3 disputes per somethingorother (lady friend is the tennis fan).

Re:Anonymous Coward (5, Insightful)

SnowZero (92219) | more than 5 years ago | (#23987875)

A system such as Hawkeye CANNOT BE MORE ACCURATE than humans. From the link in the article, the Hawkeye system uses 5 cameras to compute the 3D position of the ball. That's an overdetermined system of equations, which cannot have a unique solution due to observation errors in the camera views.

Luckily there's a 100+ year old discipline called statistics, and 60+ years of literature on tracking to help you out in these cases.

So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans, just different. FYI, there are plenty of other arbitrary rules that work, eg least absolute errors, maximum entropy, etc.

While I can't speak for the designers of the Hawkeye, in tracking there are very good reasons to choose one form of error minimization versus another. It only seems arbitrary because you are not informed on the subject, but there's plenty of free papers out there to read and discover.

To explain current methods, please start out with this paper [google.com]; in particular, in Figure 2 you'll see that the sort of errors you get from a camera are indeed well fit by a Gaussian. While a camera's perspective transformation is not purely linear (and various forms of distortion make it also non-linear), a good camera with a decent lens estimating the ball location within a limited area is well approximated by a linear model (and you can characterize just how much the error is). Now, a bunch of cameras with a Gaussian error distribution in the image plane and a linear projection out into the world is still a Gaussian (with a transformed covariance matrix). You can then multiply the independent measurements from multiple cameras to get a better estimate. Add a time series to that, apply this recursively, and you get a Kalman filter [unc.edu], something invented for aerial tracking and still in widespread use today. If something is good enough for missiles to intercept other missiles, it ought to be good enough for a tennis match.
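The multiply-the-measurements step reduces, in one dimension, to a precision-weighted product of Gaussians; a toy sketch (made-up means and variances, not real camera geometry):

```python
# Multiplying two independent Gaussian measurements of the same quantity
# gives a Gaussian whose variance is smaller than either input's, with the
# mean pulled toward the more precise measurement.
def fuse(mu1, var1, mu2, var2):
    var = 1.0 / (1.0 / var1 + 1.0 / var2)   # precisions add
    mu = var * (mu1 / var1 + mu2 / var2)    # precision-weighted mean
    return mu, var

# Hypothetical: a far camera (variance 4.0) and a closer one (variance 1.0).
mu, var = fuse(10.0, 4.0, 10.6, 1.0)
print(mu, var)   # mean near 10.48, variance 0.8 -- tighter than either alone
```

Each additional camera repeats the same step, which is why more views shrink, rather than confuse, the estimate.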

If the linear approximation is not good enough for you, you can use a Rao-Blackwellized Kalman filter. If that's still not good enough because you want to use another error distribution or non-linearizable dynamics, set up a particle filter with a whole lot of particles and enough CPU to simulate it. The point is that what you call arbitrary is a well-studied field which is many decades old. You'd be best served by learning about it first before you cast away all that work. I'm not a "tracking" person, just a user of their work. When a field of science has done its job well enough that it has become common engineering, and you can go look up whatever you need in books, with all the derivations, caveats and tradeoffs laid out there for you to see, I would say that that field has done a pretty good job.

The whole media story around this paper is ridiculous. It's a paper from a social sciences department about how the public does not understand the fallibility of these machines due to noise. That's all this paper is about: Hawkeye has error. I hate to break it to the uninformed, but all measurement systems have error. From Galileo [wikipedia.org] to Gravity Probe B [wikipedia.org] , your results can only be as accurate as your measurements, calculations, and statistical models will allow. You can decrease error with various methods, but you can never completely eliminate it. People should not be able to get out of high school without understanding accuracy on measurements, and some rudimentary statistics, but unfortunately our education system hasn't been able to reach that goal. As a result, the public doesn't understand error, and might come to believe something is infallible when it is only highly accurate. Rather than deride Hawkeye, we should use this to educate the public in a non-sensationalist way. Of course, like in high school science class, they might not pay attention.

So, sure Federer can challenge Hawkeye's call, but the most you can say is what the probability of the ball being in or out is, or use the location of maximum likelihood. Maybe that should be reported on TV; Though it would confuse many watchers, maybe we can help them learn. In the end, you have to go with the highest probability or whatever rule is agreed on. As long as it is more accurate than human calls (as measured through experiments), and it is not biased (unlike almost any human referee, who will always have some emotional component to calls, no matter how small), the only way you'll do better is by building a more accurate calling system. If you think you can, get to work; there's definitely a market for it.

Re:Anonymous Coward (2, Interesting)

martin-boundary (547041) | more than 5 years ago | (#23988069)

Luckily there's a 100+ year old discipline called statistics,

Yes, and this discipline depends on something called decision theory [wikipedia.org] , which in turn depends on an arbitrary choice of loss functions.

While I can't speak for the designers of the Hawkeye, in tracking there are very good reasons to choose one form of error minimization versus another.

None of this matters one bit if these reasons are not compatible with the sporting rules in the problem at hand. To be pedantic, if the rules say that an umpire has the final word (for example), then a tracking system which doesn't optimize for the same criteria that the umpire uses himself is irrelevant. To be even more pedantic, if one claims that the tracking solution is superior to the umpire's when their criteria differ, one is merely trying to change the rules of the game.

Thanks for the links, but I am actually familiar with (and have used) Kalman filtering in the past. The issue I raised is not an engineering one. It occurs before engineering begins, namely in the problem specification. More precisely, I responded to a post claiming that the Hawkeye system was obviously more accurate than human referees, which I consider far from obvious and said so. Perhaps I could have talked about loss functions instead of overdetermined systems, but the gist of my point remains.

So, sure Federer can challenge Hawkeye's call, but the most you can say is what the probability of the ball being in or out is, or use the location of maximum likelihood. Maybe that should be reported on TV; Though it would confuse many watchers, maybe we can help them learn.

My understanding is that the umpire is the final arbiter. People are free to come up with a model and a methodology to compute their own best version of the head judge's decision, but unless the rules specify the methodology completely, it's merely an academic exercise with no intrinsic validity for the game.

But certainly, to talk about a system being more accurate merely because it uses engineering methodologies when the problem is not fully specified to begin with is ridiculous, and verges on technology worship.

Re:Anonymous Coward (4, Insightful)

Rakishi (759894) | more than 5 years ago | (#23988207)

Just because an umpire has the final word doesn't mean that a system can't do better than him. That is because the umpire is in fact trying to measure something with a right/wrong answer. Specifically, the umpire is the person who decides if event X happened or not, which means that the goal is to see if X happened or not (not to see if the umpire thought X happened or not). The umpire isn't an inherent part of the rules but simply a judge to determine if something specified in a certain rule happened or not. As a result, it's a perfectly valid problem to predict this event X in a method that is better (ie: lower misclassification) than the umpire. Finding the winner in a horse race is one example of where technology is more accurate despite the rules likely having a person originally be the final judge.

One problem is that sometimes one can't measure the true answer in some way, so there is no way to truly measure accuracy for a problem. That is a valid problem; however, I have no clue if that or something else is the actual problem you're so concerned about (your posts are as clear as black mud). In this case there probably are more accurate systems of measuring the truth, although these take excessive money, time or preparation. One could, for example, cover the ground around the line with wet paint (or some such) and then check for breakages, or simply cover the ground with pressure sensors. The article implies they can measure the accuracy of the system compared to the true impact point, which means that one can devise experiments in which one can measure the truth of where the ball lands.

Consistency, is more important than accuracy! (5, Insightful)

Anonymous Coward | more than 5 years ago | (#23987417)

Hawkeye and the like deliver a consistent result. It matters not at all if the ball is in by two centimetres but is called out, provided that error is consistent throughout the match.
If both players, or teams, are playing by the same margin of error, the contest is fair.
In cricket, for instance, I would accept the computer's call over the umpire's any day of the week!

Re:Consistency, is more important than accuracy! (3, Interesting)

Telvin_3d (855514) | more than 5 years ago | (#23987885)

That is only true if you assume the two players are making the same level of mistakes. If both players are regularly hitting the same shots with the same amount of error, yes, everything will even out. But let's say player A can consistently serve and hit the ball to within 2 cm of the out line, but player B often misjudges and goes 1 cm over. In this case, having player A's shots consistently called 'out' or player B's shots consistently called 'in' would be consistent, but it would also make a major change in the outcome of the match. And not the type of change that gets statistically evened out over games played.

Re:Consistency, is more important than accuracy! (0)

Anonymous Coward | more than 5 years ago | (#23988425)

Yes, but that same kind of discrepancy can result no matter how the line is defined. It could just as well be that the adjusted margin results in equitability where the original line would have led to favoritism of B over A.

Sports are not, and are not *supposed* to be, deterministic like chess. The excitement of the games derives directly from the minute perturbations which create inscrutable inconsistencies. Would you, or anyone else, watch games if every outcome were rehearsed beforehand? You want to see skill pitted against skill, but you also want to see chance and opportunity.

The important thing is that the players expect and get a fair game. "Mutually optimized" is not the same thing.

Hawkeye is rather redundant in cricket actually (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23988013)

I would accept the computers call over umpires any day of the week!

rubbish! then what would we all have to argue about afterwards in the Pub?

Actually Hawkeye is pretty poor for cricket.

Hawkeye cannot 'hear' a snick to give a 'caught behind'.

Hawkeye cannot (as far as I can tell) decide if a ball is caught or if the fielder let it slip through his fingers as he scoops it up off the ground.

Hawkeye cannot tell if a Leg Bye or simple bye was scored.

I don't believe it can decide a 'wide' as there is no fixed length rule.

Hawkeye cannot tell if a ball was caught inside or outside the boundary.

Hawkeye cannot decide a run out.

Hawkeye cannot tell if the ball hits the helmet often left behind the wicket keeper (5 runs).

Hawkeye cannot even decide a no-ball yet.

and so on

The only thing Hawkeye was/is used for is to decide an LBW appeal, which is a small percentage of 'outs' in a given game, and also to show where balls have been pitched for a given bowler.

Umpires in Cricket are going nowhere.

Re:Hawkeye is rather redundant in cricket actually (3, Interesting)

InfoHighwayRoadkill (454730) | more than 5 years ago | (#23988453)

Hawkeye cannot 'hear' a snick to give a 'caught behind'.

The TV companies have a "snickometer" which puts up an analysis of the sounds picked up by a microphone in the stumps. It's only used for commentary; the umpire makes the decision himself.

Hawkeye cannot (as far as I can tell) decide if a ball is caught or if the fielder let it slip through his fingers as he scoops it up the ground.

A good tv replay can show this but as cricket is a gentleman's game it is up to the fielder making the catch to say if he thinks he made a clean catch. There have been instances in test cricket where fielders have called back batsmen after the umpire initially gave them out.

Hawkeye cannot tell if a Leg Bye or simple bye was scored.

No, but the umpire can; Hawkeye finds it very hard to spot a ball that deviates from its theoretical trajectory at the best of times.

I don't believe it can decide a 'wide' as there is no fixed length rule.

you answered your own question there

Hawkeye cannot tell if a ball was caught inside or outside the boundary.

That's because it's looking at where the ball is being bowled in the middle of the playing area; it doesn't cover the whole of the field.

Hawkeye cannot decide a run out.

That is because it is used to approximate the trajectory of the ball as it's being bowled, not when it's being thrown to the stumps, and it doesn't track the relative position of the batsman's feet and bat. TV slo-mo replays decide run outs (if the umpire is unsure) and are ideal for the purpose.

Hawkeye cannot tell if the ball hits the helmet often left behind the wicket keeper (5 runs).

The normally loud noise the ball makes when it hits the helmet, and the ball shooting off in a different direction, often suffice.

Hawkeye cannot even decide a no-ball yet.

As previously stated, Hawkeye doesn't watch people's feet; it watches the ball.

The only thing Hawkeye was/is used for is to decide an LBW appeal, which is a small percentage of 'outs' in a given game, and also to show where balls have been pitched for a given bowler.

It's only used for this purpose so the TV commentators have something to talk about. The margin of error, and the problems with picking up balls that swing in the air or move off line from the pitch, make it impossible to give an accurate ruling on an LBW.

Umpires in Cricket are going nowhere.

It's the fact that 90% of decisions in cricket are made by the umpires without needing backup that makes cricket a fascinating sport.

Re:Consistency, is more important than accuracy! (0)

Anonymous Coward | more than 5 years ago | (#23988037)

The same argument applies to a human analyzer. As long as they are making a genuine, objective use of their powers of discernment, nobody has anything more than a chance advantage.


The problem is the fans, who are *not* hired specifically to fulfill a role of objectivity, and are only too happy to place every margin of doubt to the advantage of their favored team.

A machine is nice not for making good calls but for making indisputably impartial calls. Tell people a machine decided that crazy play at the end of the game and there stands to be less yelling, less fisticuffs, less rioting.

Ummm.... (2, Informative)

Khyber (864651) | more than 5 years ago | (#23987451)

I've seen in hockey and football broadcasts the ability to track the ball or puck in real time through some system inside the playing piece (puck or football). It seems to work pretty decently to me.

Re:Ummm.... (1)

oscartheduck (866357) | more than 5 years ago | (#23987555)

It's not fully accepted in football, ultimately because of the issue of ensuring the system could withstand the impact of being kicked. Bear in mind that these tiny tennis balls reach speeds >100 mph; a system has to be able to withstand that impact.

With hockey, I imagine the fact that the puck is solid makes it simpler to ensure the integrity of the weighting of the puck after impact than it would be with a hollow tennis ball. Striking the relatively light and soft tennis ball to high speeds deforms the ball significantly, and making sure that the radio system (or whatever you pick) remains both functioning and not affecting the balance of the ball would be a challenge.

Something else that comes to mind is that with a puck, you can scoop out puck material equal in weight to the broadcasting device. With a football, there is significant mass relative to the broadcasting device. A tennis ball is both light and small, meaning that it is difficult to make sure that the ball with the broadcasting device inside it is the same weight as the ball without the broadcasting device inside it. All of this comes down, really, to the fact that a tennis ball is small, hollow, light and struck with enormous power.

Is it even used in football? (1)

BadAnalogyGuy (945258) | more than 5 years ago | (#23987597)

You mention that a football with a tracking device needs to withstand being kicked. However, the position of a football being kicked only matters when it is punted to the opposing team or when it is kicked for a field goal. Since the uprights make the latter situation simple to determine, the only real situation where the position of a kicked football matters is when a punt is returned near the sidelines. Luckily, a punt is not nearly as punishing on a football as a place kick, so a tracking device would no doubt be able to survive a punt.

Having exact positioning on a football in play is a huge help since refs are notoriously inaccurate, especially in the red zone when inches determine whether the football crosses the plane or falls short. But then again, griping about the refs is one of the aspects of the game.

Re:Ummm.... (1)

Spasemunki (63473) | more than 5 years ago | (#23987923)

It's certainly not used in NHL ice hockey. There used to be a "feature" in one of the network broadcasts of hockey games where they would add a "glow" around the puck to make it easier to follow on screen- this was done not using a tracker inside the puck, but was painted on digitally during the broadcast delay. Were a tracker to be put in the puck, the most significant use of it would be in deciding close goals where it isn't clear if the puck is over the line completely or not. No such technology is used in current NHL games. Making a resilient enough tracker to survive a slap shot would be a pretty significant challenge, though in hockey the consistency of the puck is not quite as significant as in cricket and other sports.

Call the ball Maverick (1)

stoolpigeon (454276) | more than 5 years ago | (#23987549)

As someone formerly involved in naval aviation, let me just say that the headline had me thoroughly confused for a few seconds. I thought the current fresnel lens optical systems were just fine.

Re:Call the ball Maverick (2, Funny)

Drathos (1092) | more than 5 years ago | (#23987921)

Army brat, myself, but my first thought on reading the headline was along similar lines.

I couldn't for the life of me think of a reason why a Hawkeye [wikipedia.org] would need a system to call the ball when every other pilot in the Navy has to do it with the ol' Mk. 1 Eyeball.

SOMETHING has to have the last word. (1)

The Bender (801382) | more than 5 years ago | (#23987551)

Nobody believes that Hawkeye is perfect, but the fact is that something has to have the last word on line calls, otherwise you can just look forward to hours of bickering.
Hawkeye gets that honor because it's the most accurate methodology currently available, and there is no doubt that it is completely impartial.

Don't try to make it sound like Wimbledon is making a terrible error, unless there is a better option available.

Re:SOMETHING has to have the last word. (0)

Anonymous Coward | more than 5 years ago | (#23987669)

"the fact is that something has to have the last word on line calls, otherwise you can just look forward to hours of bickering."

Tennis has survived this long without it...

Re:SOMETHING has to have the last word. (1)

The Bender (801382) | more than 5 years ago | (#23988025)

There was always a "last word". Before Hawkeye it was something called an "umpire".
And even then, oh, boy, do I remember the hours of absurd bickering and shouting that did nothing but detract from the actual game.

Are people really surprised? (0)

Anonymous Coward | more than 5 years ago | (#23987617)

Last year they showed a hawkeye image in Cricket that showed the ball going down the leg side. Now I know that one was false since leg stump was about five metres away from where it should be.

Not perfect, but definitely better than humans (1)

wolf12886 (1206182) | more than 5 years ago | (#23987651)

While Hawkeye certainly isn't perfect, I'd almost guarantee that it's at least a few orders of magnitude more accurate than the human eye.

I don't see where the controversy is. The rules are simple: the ball is either in or it's out; there's nothing left to human discretion, so nothing is lost by removing the human element.

Why shouldn't this be used whenever it's available?

Professional Tennis (0)

Anonymous Coward | more than 5 years ago | (#23987659)

Those at the professional level of tennis, such as Roger Federer (current world #1), are completely against the Hawk-Eye system. During the 2007 Wimbledon final against Nadal, Hawkeye called in a ball that appeared out to everyone, and Federer, a usually quiet guy, actually told the umpire that it was "killing" him. The main problem here is that before Nadal challenged the call, it was presumed to be out; Hawk-Eye said differently, and the umpire was in complete agreement with the machine.

However, the actual problem is that because it is a machine, people trust it to be right 100% of the time, even though on more than one occasion that has been proven wrong, just like the previous "Cyclops" system that used to be used to call whether a serve had gone past the service line (white line in the middle of the court). That is why a lot of highly ranked players don't agree with the Hawk-eye system: it takes away the umpire's responsibility to do his job, which is to make difficult line calls. The players have enough to worry about without needing to make line calls themselves.

Summary Miseleading? No Wai! (3, Informative)

Zackbass (457384) | more than 5 years ago | (#23987691)

For those who didn't care to RTFA, the study is in the journal 'Public Understanding of Science' and (golly, who would have guessed) doesn't have anything to do with the summary as written. They argue that measurement uncertainties that normally don't impact the layman now need to be presented in an understandable way. They worry that people will wrongly become too trusting of systems that do have appreciable error in rare circumstances.

To inject my own opinion on the matter, I've had a little bit of experience with Vicon motion capture systems, which appear to use similar technology to the Hawkeye system. The main problem with the system (when it works) isn't accuracy or precision; in fact, it's awesome. The problem is that the output is a little noisy and suffers from occasional jumps and hiccups. With proper filtering these are eliminated and the output is amazing. I can only imagine the problem is much easier when you're tracking a single ball rather than tens of tiny reflective markers as with the Vicon system.
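For what it's worth, the filtering I mean can be as simple as a moving average over the tracked positions. Here's a minimal, hypothetical sketch (my own illustration, not Vicon's or Hawkeye's actual pipeline) showing how a single-frame tracking hiccup gets knocked down:

```python
def moving_average(samples, window=5):
    """Return a smoothed copy of `samples` using a centered moving average.

    Near the ends of the track, the window shrinks to whatever
    neighborhood is available, so the output has the same length.
    """
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        neighborhood = samples[lo:hi]
        smoothed.append(sum(neighborhood) / len(neighborhood))
    return smoothed

if __name__ == "__main__":
    # A steady ramp with one spurious jump at index 5 -- a tracking hiccup.
    track = [0.0, 1.0, 2.0, 3.0, 4.0, 50.0, 6.0, 7.0, 8.0, 9.0]
    print(moving_average(track))  # the spike at index 5 is heavily damped
```

Real systems use far more sophisticated filters (Kalman-style predictors, for instance), but the principle is the same: individual frames are noisy, the trajectory as a whole is not.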

Re:Summary Miseleading? No Wai! (0)

Anonymous Coward | more than 5 years ago | (#23987907)

None of the articles I read were in the journal 'Public Understanding of Science', and they were both about the Hawkeye system. I know the summaries on Slashdot are often misleading, but this isn't one of those times.

5mm? Pffff (1)

Tablizer (95088) | more than 5 years ago | (#23987703)

From article: The machine reported that the ball nicked the baseline by 1mm. However, Hawk-Eye Innovations also report that the average error of the machine is 3.6mm. If the Cardiff analysis is correct, the errors can be even larger than 3.6mm on some occasions. The International Tennis Federation, which tests the machines for use, would accept that Hawk-Eye had passed its test if it called the ball in by 1mm while the true position was out by 5mm.

Only 5mm? A human judge watching a fast-moving ball is NOT likely to do better than that.
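To put the quoted numbers in perspective, here's a back-of-the-envelope sketch (my own, not from the article or the Cardiff paper) that treats the reported 3.6mm average error as the standard deviation of a zero-mean Gaussian — an assumption, since neither source specifies the error distribution — and asks how likely a ball called in by 1mm was actually out:

```python
import math

def prob_actually_out(margin_mm, sigma_mm=3.6):
    """Probability the true ball position was out, given that the system
    measured it `margin_mm` inside the line, assuming a zero-mean Gaussian
    measurement error with standard deviation `sigma_mm` (an assumption)."""
    z = margin_mm / sigma_mm
    # P(error > margin) = 1 - Phi(z), expressed via the error function
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

if __name__ == "__main__":
    # A ball called "in by 1mm" with a 3.6mm error is far from a sure thing.
    print(f"{prob_actually_out(1.0):.2f}")
```

Under that assumption, roughly 39% of such 1mm "in" calls would correspond to balls that were actually out — exactly the kind of residual uncertainty the Cardiff researchers argue should be communicated rather than hidden behind a definitive-looking animation.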
     

Unanswered Questions (1)

felix9x (562120) | more than 5 years ago | (#23987813)

The articles don't really answer some key questions about how the system determines if the ball hit the line. I guess the details are in the study. Anybody read it out there?

The article says the system takes into account the compression of the ball and the fact that it skids on the ground. So the system tries to determine which parts of the ball actually touched the surface? Those must be statistical calculations, because I don't see how the system can actually see the ball compressing in 3D or tell which parts are making contact with the surface. So this is where we get the 3mm error. This raises a good point: if the results during a game are only accurate to within 3mm, should they be accepted?

Doesn't matter, as long as it is consistent (1)

topham (32406) | more than 5 years ago | (#23987833)

It doesn't matter how good, or even how bad, the system is. As long as it cannot be shown to have a particular bias toward a player or a side of the court, it is automatically more fair than any existing judge.

Period.

It doesn't matter if it is even out by 5 centimeters, never mind having an error rate of less than half a centimeter.

Re:Doesn't matter, as long as it is consistent (1)

mdmkolbe (944892) | more than 5 years ago | (#23988035)

An overly generous or overly tight judge will inherently favor either risk taking or conservative players. Thus there will always be a bias.

Now the same could be said of umpires, but with umpires everyone understands the sorts of errors they introduce. The idea that the umpire could be wrong is freely discussed and lived with. As the article points out, systems like HawkEye usually foster the public perception that they are the absolute truth when in fact every physical measurement has a statistical error in it. The article suggests that at-home viewers should be presented with some graphic that emphasizes that fact. (How is left up to the graphic arts people.)

In fact presenting the error rate could make the system more accepted by viewers. If the system is presented as perfect and the viewer sees an "obviously" wrong call, the viewer will conclude the system is broken and doesn't work at all. On the other hand, if the system is presented as having a small error rate, that "obviously" wrong call may be chalked up to the error rate and not some inherent brokenness of the system.

Refereeing is by many considered PART of the game. (4, Interesting)

SamP2 (1097897) | more than 5 years ago | (#23987993)

For a system like Hawkeye to be useful, it doesn't need to be perfect. It just needs to consistently be more accurate and impartial than a referee can be.

Nor is it required for the system to be fully automatic and autonomous. Referees can sit in front of their monitors, observe the cameras from all angles with any time slowdown, and ultimately come to a better decision than a single person could while the ball buzzes past them at Warp 9.

But from the social aspect, one has to decide on what is the referee's role, and what kind of influence, if any, do we want to delegate to a computer. And that depends on the type of sport.

For non-interactive sports such as sprinting, an automatic system works very efficiently, and most people readily accept it as better than a human time tracker.

But for many GAME sports (soccer and boxing come to mind) many people consider that a referee is PART of the game rather than just an observer. As long as a referee is comparatively competent, and acts in good faith, he has the authority to judge events in the game, and while mistakes are unavoidable, they are considered part of the game as well.

I'm not sure why this position is popular in these kinds of sports. Maybe it's the whole "humans should be judged by humans and not machines" aspect. Or maybe it's because having a review commission judging every little move in front of CCTV monitors just feels too 1984-ish for spectators and players alike. Or maybe it's something else. But this is a rather popular feeling.

Depending on the features and benchmarks of the electronic system, it may or may not be more accurate than a human observer. In the long term, a joint human-computer analysis system would certainly be more accurate than a human referee alone, especially in team or high-speed sports. But one has to ask whether, by gaining mathematical precision, we lose some of the human touch that makes sport enjoyable to play and watch. Fun can't be generated with a mathematical formula. And sometimes sitting on the couch thinking "OMG, that referee is such a dumbass" is part of the fun as well.

Transhumanism (0, Offtopic)

LS (57954) | more than 5 years ago | (#23988217)

It's funny how the Slashdot community is so clear-headed and informed about the apparent impossibility of something as seemingly simple as putting a radio transmitter in a tennis ball, but turns into pseudo-religious nuts when it comes to uploading their souls into future AI networks. There's no consistency here.

No more slo-mo replays... (1, Insightful)

Grey Haired Luser (148205) | more than 5 years ago | (#23988335)

I'm sure you've all noticed that since the introduction of Hawkeye, networks have consistently stopped showing those wonderful slo-mo replays, which, more often than not, would simply show that the machine was in error.

The irony, of course, is that those replays are being ignored just at the time when high-speed camera technology was getting good and cheap enough to be useful for umpiring.

A much better system would be to allow players to ask an umpire for a video replay on demand, with the right to be wrong at most twice in a row.

Machine mistakes? (1)

Swoopy (101558) | more than 5 years ago | (#23988485)

If the point of the article is mainly to awaken public perception about machine referees, then sentences like "but the fact that the machine can also make mistakes should always be clear" don't help at all. People should at least be aware that as long as they aren't broken and function as designed, machines don't make mistakes (a mistake here being 'producing output not in accordance with the input and the design specification'). Machines, however, CAN be inaccurate, and often this inaccuracy is part of the design specification. Equating inaccuracy with "making mistakes" misinforms the public as badly as maintaining the aura of perfection that currently surrounds sophisticated machinery.
