Comments


Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

imgod2u Re:Few Million a Year is a BIG Stretch Goal (181 comments)

The stated ambition was "about a million cars." The article's quote is incorrect; the liveblog (and video) are much more accurate.

One thing unique about Tesla's manufacturing is that its supply chain is raw materials. Almost everything else is produced by Tesla itself at the Fremont factory. Many questioned why this decision was made, but it has real long-term benefits: when your supply chain is all raw materials, availability becomes much more predictable, and it's far easier to influence supply by pumping some money into a mine than by, say, getting a different company to shape up and manufacture more parts.

The only parts of a Tesla that aren't produced at Fremont are the batteries, and that's why the Gigafactory is coming online.

about two weeks ago

Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

imgod2u Re:Tell me it ain't so, Elon! (181 comments)

Maybe not, but every politician represents the needs of his electorate, and that electorate certainly wants to keep its jobs.

I'm not saying it's a good reason, but oftentimes technological change can blindside a good portion of the population, and we have to consider that. Perhaps not stop progress, but definitely slow adoption enough to give people time to find new jobs.

about two weeks ago

Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

imgod2u Re:Tell me it ain't so, Elon! (181 comments)

In such a situation, the franchise owner should've had enough foresight (especially given the vast amount of prior history) to add a non-compete clause to the franchise agreement. Free market and all.

Sometimes one side of such a contract has too much power, and we need the government to step in and make a law. The problem with that approach is that those laws often outlive their intent. The franchise laws protecting auto dealers were enacted in an era when the Big Three automakers were the only business in town and continually abused that position. Nowadays they're scrambling for their lives.

The laws in place are no longer needed, and they now hamper innovation by presenting a major barrier to entry for upstart car companies -- something the people who wrote those laws never considered possible. Therefore they should be repealed.

about two weeks ago

How We'll Program 1000 Cores - and Get Linus Ranting, Again

imgod2u Re:Core of the article (449 comments)

How about graceful seg faults instead of program crashes? Obviously, modern architectures don't really support such things, but one can imagine a processor that detects bad pointers instead of crashing the program. In fact, each program, or even each transaction, could register a pre-determined fault handler.

What'll happen is:

1. Thread A sets a "start of code snippet" marker and registers the address of a fault handler.
2. Thread B starts its processing as well.
3. Thread A at some point tries to dereference a pointer at address X.
4. Thread B races ahead and deletes the pointer at address X.
5. Normally, under protected memory, the processor would throw a fit when thread A accesses the now-illegal address.
6. Instead, the processor jumps to thread A's custom fault handler.
7. Thread A's fault handler sees "my code snippet tried to access an illegal address, and I, the thread, am not guaranteed to be thread-safe." It then rolls back all of the work done up to the faulting instruction.
8. Thread A retries from step 1. At some point (if it faults too many times), it could give up on the thread-unsafe fast path and fall back to the old mutex-locking method.

The idea is that the majority of the time, thread A and thread B don't actually conflict, or thread A wins the race. In those cases, you get a genuine parallel speedup.

It's up to the programmer (or the compiler, probably a JIT) to recognize when to exploit this, by analyzing the algorithm and the likelihood of conflict. A JIT would probably use profiling information gathered in real time.

Nobody's saying this will replace 100% of synchronization methods, and it doesn't need to. To get a speedup, you technically only need to replace one use case; most likely, you can replace the large majority (90%) of them.
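
To make that concrete, here's a minimal sketch of the retry-then-fall-back pattern using Intel's TSX/RTM instructions, the closest thing shipping hardware has to the custom fault handler described above (a conflict aborts the transaction and control returns to _xbegin(), which plays the role of the handler). The function name and retry limit are made up for illustration:

    // Sketch only: requires a TSX-capable CPU and compiling with -mrtm.
    #include <immintrin.h>
    #include <mutex>

    std::mutex fallback_lock;        // the "old mutex locking method"
    constexpr int kMaxRetries = 3;   // give up on speculation after this many aborts

    void update_shared(int* slot, int value) {
        for (int attempt = 0; attempt < kMaxRetries; ++attempt) {
            unsigned status = _xbegin();      // begin speculative region (step 1)
            if (status == _XBEGIN_STARTED) {
                *slot = value;                // touch shared memory (step 3)
                _xend();                      // no conflict: commit and finish
                return;
            }
            // Another thread collided (steps 4-7): the hardware has already
            // rolled back all speculative work, so just retry.
        }
        // Still faulting (step 8): fall back to a real lock.
        std::lock_guard<std::mutex> guard(fallback_lock);
        *slot = value;
    }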

about a month ago

How We'll Program 1000 Cores - and Get Linus Ranting, Again

imgod2u Re:Core of the article (449 comments)

A lot of transactional machines don't have locks. That's not to say they have no mutex-like structures at all; rather, the sequences themselves are treated as locks, which allows finer granularity than normal mutex algorithms.

about a month ago

How We'll Program 1000 Cores - and Get Linus Ranting, Again

imgod2u Re:Core of the article (449 comments)

The idea isn't that the computer ends up with an incorrect result. The idea is that the computer is designed to be fast at doing things in parallel, with the occasional hiccup that flags an error and re-runs things the traditional slow way. How large a window you can allow for "screwing up" determines how much performance you gain.

This is essentially the idea behind transactional memory: optimize for the common case where threads that would use a lock don't actually access the same byte (or page, or cacheline) of memory. Elide the lock (pretend it isn't there), have the two threads run in parallel and if they do happen to collide, roll back and re-run in the slow way.
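
A seqlock is one all-software flavor of the same bet: readers skip the lock entirely, check a sequence counter before and after reading, and simply redo the read if a writer collided with them. A hypothetical sketch (assumes a single writer):

    #include <atomic>

    struct SeqLocked {
        std::atomic<unsigned> seq{0};   // odd while a writer is mid-update
        int data = 0;                   // the protected payload
    };

    // Writer: mark the update in progress, mutate, mark it done.
    void write_value(SeqLocked& s, int v) {
        s.seq.fetch_add(1);   // now odd: update in progress
        s.data = v;
        s.seq.fetch_add(1);   // even again: update visible
    }

    // Reader: optimistic and lock-free; retries only on an actual collision.
    int read_value(SeqLocked& s) {
        for (;;) {
            unsigned before = s.seq.load();
            if (before & 1) continue;       // writer mid-update; try again
            int v = s.data;                 // optimistic read, no lock taken
            if (s.seq.load() == before)
                return v;                   // no writer collided: success
            // A writer collided: "roll back" by simply re-reading.
        }
    }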

We actually see this concept play out in many hardware and software algorithms. Hell, TCP/IP is built on letting packets flow freely and occasionally collide or drop, on the theory that you can always resend. That speeds up the common case: packets making it to their destination along one path.

about a month ago

Google Fiber's Latest FCC Filing: Comcast's Nightmare Come To Life

imgod2u Re:One fiber to rule them... (221 comments)

REAL proponents of free-market capitalism should have no problem with that idea. Those who do are those who either (A) don't understand that what we currently have is an oligopoly, not a free market, or (B) want to protect their privileged position.

Or (C) think they should be able to sell faster access, or priority service, to some.

The whole problem with net neutrality is that it wants everyone to be the same even though everyone doesn't want to be the same. Suppose your Aunt Mary only checks email and recipes on the internet, so she decided to get the cheapest version of broadband she could. Now suppose Netflix says: we want to serve her, but she only has a 1.5-meg connection and needs a 4-meg connection to use our service effectively. So they pay to have her service boosted for the packets that stream from their servers, so they don't have to convince Aunt Mary not only to pay them a monthly rate, but also to pay her provider more for faster service.

So now Aunt Mary can keep the slow service she likes and still have Netflix for those nights when the cats and cable TV just aren't enough. But net neutrality proponents say they don't want that: Aunt Mary will have to pony up all the money herself.

Except Title II isn't about net neutrality. Title II is about letting more companies access the physical lines so that there's competition -- so that even if priority access is something the market wants, ISPs won't get to overtly abuse their ability to sell paid priority lanes. It's about encouraging more competition (similar to anti-trust laws) so that market forces can work.

about a month ago

Is the Tesla Model 3 Actually Going To Cost $50,000?

imgod2u Re:Still pretty affordable (393 comments)

Yes on the latter question. In northern CA, they also offer "EV" plans with no tiers. The base rate is higher (11 cents/kWh after 11pm, peaking around 35 cents/kWh during the day) than the tiered system (which starts at 5 cents/kWh after 11pm with a peak of ~15 cents/kWh, but climbs steeply as you move up the tiers).

But if you're charging an EV, you'll likely blow past the tiers anyway, so the EV plan works out better. With a Model S, at least, you really only need to charge at night, and the software lets you schedule charging.
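
Back-of-the-envelope on what off-peak charging costs under that EV plan (the driving profile here is my assumption, not part of the tariff):

    #include <cstdio>

    int main() {
        // Assumed driving profile -- illustrative only.
        const double miles_per_day = 40.0;
        const double kwh_per_mile  = 0.35;   // rough Model S figure
        const double night_rate    = 0.11;   // $/kWh after 11pm on the EV plan

        double kwh_month  = miles_per_day * kwh_per_mile * 30;
        double cost_month = kwh_month * night_rate;
        std::printf("~%.0f kWh/month -> ~$%.0f/month\n", kwh_month, cost_month);
        // Prints roughly: ~420 kWh/month -> ~$46/month
        return 0;
    }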

about 4 months ago

Is the Tesla Model 3 Actually Going To Cost $50,000?

imgod2u Re: Still pretty affordable (393 comments)

Well let's see:

Your food.
The gasoline you buy.
The other cars you could buy.
Your bank account insurance.
Your home insurance.

The list goes on and on. I don't think you live in the government-free Somalian paradise you think you do.

about 4 months ago

Is the Tesla Model 3 Actually Going To Cost $50,000?

imgod2u Re: Still pretty affordable (393 comments)

That math really only works if you charge during peak hours (which most people don't) and compare it to the cost per mile of a Prius -- about as stretched an assumption as you can make.

about 4 months ago

Is the Tesla Model 3 Actually Going To Cost $50,000?

imgod2u Re: More importantly (393 comments)

Li-ion batteries can be recycled...

about 4 months ago

Microsoft Black Tuesday Patches Bring Blue Screens of Death

imgod2u Re:The suck, it burns .... (179 comments)

I think the criticism isn't that they're unresponsive to consumers -- they obviously listen. The criticism is that there are so many holes to begin with, and that their attempts to fix things that are obviously broken -- things their competitors seem to make work just fine -- often don't work or cause other problems. Knowing Microsoft's engineering culture, their stuff is mostly a patchwork of different groups not talking to each other. In the Windows API, there are something like 17 different representations of strings, depending on which engineer/department wrote the code!
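
For the curious, here's a small sample of those coexisting string types (all real Windows API types; the function itself is just for illustration):

    #include <windows.h>    // LPCSTR, LPCWSTR, ...
    #include <oleauto.h>    // BSTR
    #include <string>

    void string_zoo() {
        LPCSTR       ansi = "hello";                   // ANSI, char-based
        LPCWSTR      wide = L"hello";                  // UTF-16, wchar_t-based
        BSTR         bstr = SysAllocString(L"hello");  // COM/Automation string
        std::string  stl  = "hello";                   // C++ standard library
        std::wstring wstl = L"hello";                  // ...and its wide twin
        SysFreeString(bstr);   // BSTRs even have their own allocator
    }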

When you're disorganized like that in a giant company with a giant piece of software, it's easy to see how bugs can get out of hand.

about 6 months ago

Tesla Sending New Wall-Charger Adapters After Garage Fire

imgod2u Re: Is Tesla making cars... (195 comments)

People haven't stopped beta testing, in either hardware or software. Releases have gotten quicker because the vast majority of software nowadays is built inside a sandbox (mobile apps, cloud servers, etc.) rather than from scratch.

It's not like software or hardware back then was any more reliable. Office, OS 9, and Windows (all versions) have always been plagued with problems, and one can argue they have fewer obvious bugs now than they did before. When's the last time you got a BSOD?

The counterbalance is that the consumer base is far, far larger now. Some of us who were at Intel at the height of the Pentium 4 were happy to have sold 40M units in a year; mobile phone processors at Qualcomm nowadays clear 400M a quarter.

If it seems like hardware and software bugs show up faster, it's because the userbase that hits and reports such bugs (easy to do now via social media) is much, much larger.

1 year, 20 days

Casting a Jaundiced Eye On AnTuTu Benchmark Claims Favoring Intel

imgod2u Re:Time for ARM to invest in GCC (82 comments)

Well, no. There are better compilers out there for ARM -- Keil, for one. More importantly, real code that cares about performance won't just write a loop and let the compiler take care of it; it'll use optimized libraries (which both Intel and ARM provide).

Compiler features like auto-vectorization are neat and do improve spaghetti-code performance somewhat, but anyone really concerned with performance will take Intel's optimized libraries over them. So if we're going to compare performance the end user actually cares about, we'd use a benchmark that mimics not only the functions we'd see in real software but also the libraries they use.
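
As an illustration of the gap being measured, compare a plain loop the compiler must vectorize itself against a call into a tuned BLAS routine (cblas_sdot is standard CBLAS; Intel's MKL ships an implementation, as do ARM's performance libraries):

    #include <cblas.h>

    // Naive loop: the compiler may or may not auto-vectorize this well.
    float dot_naive(const float* x, const float* y, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
            sum += x[i] * y[i];
        return sum;
    }

    // Tuned library: hand-optimized for the target microarchitecture.
    float dot_blas(const float* x, const float* y, int n) {
        return cblas_sdot(n, x, 1, y, 1);
    }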

about a year and a half ago

Graphene-Based Image Sensor To Enhance Low-Light Photography

imgod2u Re: 1000 times better? (103 comments)

Exposure improvements compound as well. A camera with 2x the sensitivity goes from, say, 80% QE to 90% QE; the next 2x gets you to 95%.

That may not seem like much, but keep in mind that vision itself is logarithmic, so going from 98% to 99% QE gets you dramatically better results than going from, say, 40% to 41%.
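
One way to read those numbers (my inference, not spelled out in the article): each 2x in sensitivity halves the fraction of photons missed, i.e.

    QE_next = 1 - (1 - QE)/2:    0.80 -> 0.90 -> 0.95 -> 0.975 -> ...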

about a year and a half ago

Game of Thrones The Most Pirated TV Show of the Season

imgod2u Re:Big shock... (312 comments)

When a large mass of people are willing to pay, but you choose to limit the market to a much smaller mass just so that you can charge more, that's the definition of artificial scarcity.

more than 2 years ago

Power-Saving Web Pages: Real Or Myth?

imgod2u OLEDs (424 comments)

The idea is valid for all the smartphones running OLED displays. OLEDs take little to no power to display a black pixel, and full power to display white.
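
As a first-order model (the proportionality is my assumption, not a measured figure), emissive-panel power scales roughly with the summed brightness of the lit pixels plus a fixed overhead:

    P_panel ~ P_overhead + P_full_white x (average pixel level, 0 = black .. 1 = white)

So a mostly-black page costs little more than the overhead, while a white page draws the full emissive power -- whereas an LCD's backlight burns the same power either way.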

more than 2 years ago

The F-35 Story

imgod2u Re:Only "troubled" if you're not Lockheed Martin (509 comments)

Problem is, they've already spent the money, both on internal equipment and on people's salaries. Even if you liquidate the company, you're not likely to get even half of that money back, and you'll put thousands out of a job. Granted, they're probably jobs that should never have existed, but politically it's rather impossible to do.

more than 3 years ago

The F-35 Story

imgod2u Re:Only "troubled" if you're not Lockheed Martin (509 comments)

Aside from the problem of lobbyists, a big difference between DoD projects and private companies is that very few (if any) private companies have enough cash reserves to fund one of these projects. The private sector has very few "here's the money, now promise with sugar on top that you'll spend it wisely" contracts. Generally, you build something with your own cash (or venture capital/IPO funding) and then sell it. A ~$66 billion project is something even Apple can't fund out of pocket, let alone smaller manufacturing companies.

So we're left with the problem of how to rein in costs when you have to hand the contractor money just to start research. You can't be overzealous with auditing, because that hurts results: they'll be so busy keeping records and hitting artificial metrics that they'll spend less time doing actual R&D. But on the other hand, you can't just let engineers and managers run wild with an unlimited budget.

And at the end of it, if they're late, you can't just cut them off, because that would mean throwing away all the money you've already given them.

more than 3 years ago

Apple Hopes To Drop Samsung As Chip Supplier

imgod2u Re:So... (107 comments)

I doubt there's anything so sinister. One thing to note is that TSMC's 28nm process is ready now; chips will start mass production on it at the end of the year.

I don't think Samsung has its 32nm HKMG process ready for the kind of volume Apple would need for the A6. The A5 is already huge and would likely not fit in a phone. Apple's only chance of getting more horsepower into future iPhones without reusing the A4 is to switch to a smaller process.

more than 3 years ago

Submissions

imgod2u hasn't submitted any stories.

Journals

imgod2u has no journal entries.
