ECC is actually more reliable, for its problem domain, than a triple voting system. The probability that you would arrive at a valid ECC code for bad data due to multiple bit flips is much lower than the probability of two out of three systems voting wrong.

I'd say it all depends on the architecture of the systems in question, and there are a variety of possible outcomes. If you compare a bit-level TMR system to an ECC system (suppose, generously, 8 data bits and 1 ECC bit - a real Hamming code for 8 data bits needs 4 check bits, which only widens the gap) in which all bits are equally susceptible to upset, then you clearly have a greater chance of accumulating 2 bits in error in the ECC system, just because you've got 9 chances to upset 2 bits instead of only 3. If you're flipping coins, you've got a better chance of seeing heads twice in 9 flips than you do of seeing heads twice in 3 flips.
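The coin-flip comparison is easy to check exactly; here's a minimal sketch using binomial tail probabilities, with trial counts chosen to mirror the 9-bit ECC word and the 3-bit TMR cell:

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k successes in n independent trials,
    each succeeding with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of seeing heads at least twice with a fair coin:
print(p_at_least(2, 9, 0.5))  # 9 flips: ~0.98
print(p_at_least(2, 3, 0.5))  # 3 flips: 0.5
```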

Suppose you've got a 10% chance of accumulating an error in a given bit per day. You end up with roughly a 72% chance of 2 upsets in the ECC architecture (0.9 × 0.8: about a 90% chance of a first upset somewhere in the 9 bits, times about 80% for a second, a crude approximation), vs. 6% (0.3 × 0.2) in the TMR architecture. Granted, you've got to multiply that by 8 to get the same amount of data storage as the ECC example. So at the end of the day in this notional case, you end up with a 72% chance of lost data in the ECC architecture and a 48% chance of lost data in the TMR example. I've used imprecise approximations, but they demonstrate that, statistically speaking, TMR can provide better protection than ECC in some cases.
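For what it's worth, those figures are back-of-the-envelope; an exact binomial calculation under the same notional 10%-per-bit-per-day assumption gives smaller absolute numbers (about 23% vs. about 20%) but preserves the ordering:

```python
def p_multi(n, p):
    """Probability of 2+ upsets among n bits, each independently
    flipping with probability p."""
    return 1 - (1 - p)**n - n * p * (1 - p)**(n - 1)

p = 0.10                     # notional per-bit upset probability per day
ecc_word = p_multi(9, p)     # 9-bit ECC word: ~0.225
tmr_cell = p_multi(3, p)     # one 3-bit TMR cell: 0.028
tmr_8bits = 1 - (1 - tmr_cell)**8   # 8 TMR cells for 8 data bits: ~0.203

print(ecc_word, tmr_8bits)
```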

Now, many ECC algorithms provide a single-error-correct, double-error-detect (SECDED) capability, which does confer a meaningful advantage over TMR, where you can correct a single bit but have no idea whether you're actually looking at a two-bit error. On the whole, though, you're still going to get double-bit upsets far more often with ECC simply because there's a larger target area. So you end up with data loss more often with ECC, but at least you know that you've lost data.
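To make the TMR blind spot concrete, here's a hypothetical bit-level majority voter: when the same bit position flips in two of the three copies, the vote goes through silently with bad data.

```python
def tmr_vote(a, b, c):
    """Bitwise majority of three copies: each output bit takes the value
    held by at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

data = 0b1010_1010

# One copy upset: the vote restores the data.
assert tmr_vote(data ^ 0b0000_0100, data, data) == data

# Same bit upset in two copies: the voter silently outvotes the good copy.
assert tmr_vote(data ^ 0b0000_0100, data ^ 0b0000_0100, data) != data
```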

It's also worth noting that if TMR is implemented at the byte level, where you compare three copies of each byte, TMR looks a lot better, because it's very unlikely that you'll upset the same bit in two different copies. So effectively you end up with something more like SECDED.
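A sketch of that byte-level scheme (the names here are mine, not any standard API): a single upset is outvoted, while two upsets landing in different copies leave all three bytes disagreeing, which is detectable even though it isn't correctable - the SECDED-like behavior described above.

```python
def vote_bytes(a, b, c):
    """Byte-level TMR: return (value, ok). If any two copies agree, trust
    them; if all three differ, flag a detected-but-uncorrectable error."""
    if a == b or a == c:
        return a, True
    if b == c:
        return b, True
    return a, False  # all three disagree: detected, not corrected

data = 0xA5

# Single upset: the two clean copies outvote it.
assert vote_bytes(data ^ 0x01, data, data) == (data, True)

# Two upsets in different bits of different copies: all three bytes
# differ, so the error is at least detected (SECDED-like).
assert vote_bytes(data ^ 0x01, data ^ 0x10, data)[1] is False
```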

Anyhow, it's a complex topic with lots of potential for statistical hangups. In my experience, ECC is attractive primarily because it is so efficient compared to TMR - you're generally talking a 1-10% memory overhead for some very capable protection that will typically bring upsets down by a few orders of magnitude. With TMR, the overhead is 200% - three full copies of everything. However, TMR can be simple, and for situations where you have memory capacity, board space, and/or power to spare, it can be a superior option.

Last but not least, in both cases the rate of uncorrectable errors is highly dependent on data retention time. You have to keep moving data in and out - i.e., scrubbing - fast enough that the chance of accumulating multiple error bits in a data word stays small. So there's a time component to the whole discussion as well.
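To put rough numbers on that time component (the 72-bit word, approximating a 64+8 SECDED geometry, and the upset rate are both notional assumptions): with independent upsets at a constant per-bit rate, the chance of a word collecting a second flip before the next scrub pass scales roughly with the square of the scrub period.

```python
from math import exp

def p_word_loss(n_bits, rate, period_hours):
    """Probability of 2+ upsets in an n-bit word within one scrub period,
    assuming independent upsets at `rate` per bit per hour."""
    p = 1 - exp(-rate * period_hours)  # per-bit flip probability by scrub time
    return 1 - (1 - p)**n_bits - n_bits * p * (1 - p)**(n_bits - 1)

RATE = 1e-5  # notional upsets per bit per hour
daily = p_word_loss(72, RATE, 24.0)      # scrub once per day
frequent = p_word_loss(72, RATE, 2.4)    # scrub 10x as often

# Scrubbing 10x more often cuts the uncorrectable-error probability
# by roughly 100x, since the 2-upset probability goes as period squared.
print(daily, frequent, daily / frequent)
```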