Comments

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:Nice... (132 comments)

Motorola did produce an HCMOS version of the 68000 and related CPUs, e.g. the 68HC000 and friends, which I used extensively. HCMOS was billed as a 'fast' version of CMOS, trading off some current draw for speed. As people might remember, HCMOS pulls and pushes about the same (though the ground paths are rated higher): about 50 ohms to either rail, more capacitive than resistive, so current draw ran more in line with frequency. You could use 1M pull-down resistors on the tri-state buses, and the logic was pretty bulletproof in terms of noise and reflections.
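
That frequency-proportional draw is just CMOS dynamic power. A rough worked example (the load capacitance and clock here are my own illustrative numbers, not from the post):

    P = C * V^2 * f = 50 pF * (5 V)^2 * 8 MHz = 10 mW per continuously switching output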

I really loved HCMOS and hated TTL, but eventually advanced TTL beat it out (at least for the pin interface logic).

-Matt

yesterday

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:The original 68000 interrupts were inadequate (132 comments)

Interrupts worked fine. It was bus errors (i.e. for off-chip memory protection and/or mapping units) that were a problem. The 68010 fixed that particular issue, if I recall. I'm guessing later 68008s also did, but I don't know. It doesn't matter here, since he isn't running with any memory protection.

You could in fact run a real multi-tasking OS on the 68000. I was running one of my own design for my telemetry projects. It didn't have memory mapping, but it did have memory protection via an external static RAM, an 8:1 selector, and some logic. It managed around 20-30 processes.

And, strangely enough, you could also run an RTOS, because the 68000 had wonderful prioritized interrupts. Back then, of course, real-time response was required for handling serial ports and things like that.

-Matt

yesterday

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Re:Hey, congratulations (132 comments)

Er, I meant 8:1 selector (the R/~W bit was fed into one of the select inputs). The function code logic was used to selectively enable/disable the memory protection unit, so supervisor accesses bypassed it while user accesses did not, which is good, because it wouldn't have been able to boot otherwise.

Another use for the FC logic was to speed up the auto-vector code. The 68K had wonderful asynchronous interrupt logic. You basically had 8 priority levels; you could feed your I/O chips into a simple 8:3 priority encoder and feed the result into the interrupt priority level pins on the CPU. The 68K would then do an interrupt vector acquisition bus cycle to get the vector (or you could tell it to generate an internal vector). Every once in a while the async logic would screw up and we'd get an uninitialized interrupt vector, but the code to deal with that was trivial, and since it was all level logic the hardware would sort it out soon enough and settle on the correct IPL to request.

The autovectors were slow, though... it was far better to generate vectors from a RAM (if I remember right). The FC logic could be used to steer the access to the RAM or the EPROM (since, during the acquisition cycle, the address lines were I think all 1's except for three bits giving the priority level being fetched, or something like that).
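
To make that concrete, here is a minimal bare-metal sketch (my own illustration, not the original code) of populating a 68000 vector table, including the trivial handler for stray uninitialized vectors. The vector numbers are from the 68000 manual; the handler names are made up, and interrupt_handler is an m68k-gcc extension:

/* Sketch: 68000 exception vector setup. On the 68000 the table is
 * fixed at address 0 (no VBR until the 68010); each entry is a 32-bit
 * handler address. Vector 15 = uninitialized interrupt, vector 24 =
 * spurious interrupt, vectors 25..31 = level 1-7 autovectors. */

#include <stdint.h>

#define VECTOR_BASE      ((volatile uint32_t *)0)
#define VEC_UNINIT_INT   15
#define VEC_SPURIOUS_INT 24
#define VEC_AUTOVEC(lvl) (24 + (lvl))   /* lvl = 1..7 */

/* m68k-gcc extension: generates an RTE-terminated handler. */
__attribute__((interrupt_handler)) static void ignore_irq(void)
{
    /* Stray/uninitialized vector: do nothing. Since the IPL inputs
     * are level logic, a still-pending interrupt is re-requested. */
}

__attribute__((interrupt_handler)) static void uart_irq(void)
{
    /* ...service whatever I/O chip is wired to this level... */
}

void install_vectors(void)
{
    VECTOR_BASE[VEC_UNINIT_INT]   = (uint32_t)(uintptr_t)ignore_irq;
    VECTOR_BASE[VEC_SPURIOUS_INT] = (uint32_t)(uintptr_t)ignore_irq;
    VECTOR_BASE[VEC_AUTOVEC(2)]   = (uint32_t)(uintptr_t)uart_irq;  /* e.g. IPL 2 */
}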

In contrast, even to this day Intel STILL can't get their interrupt logic to work properly. Even the MSI-X logic is broken in a lot of chipsets. Yuck. So much ridiculous and unnecessary complexity in Intel interrupt handling, with all the idiotic IOAPIC and LAPIC sub-processors, their non-deterministic reaction times, serialization problems, and other stupid stuff. The Motorola interrupt logic was a dream in comparison.

-Matt

yesterday

Linux On a Motorola 68000 Solder-less Breadboard

m.dillon Hey, congratulations (132 comments)

Congratulations, you are now in a rare group indeed. But I gotta say, you haven't lived until you've programmed a 6502 directly in machine code. No assembler :-)

One of my telemetry systems, which I designed and built 25+ years ago, used the 68000 running at around 10 MHz (with a jumper for 20 MHz, which it could actually do, though I didn't deploy it at those speeds). The coolest thing about it was that I had built a rudimentary memory protection unit using a static RAM and an 8:2 selector. Any user-mode access pumped the high address bits into the RAM, with 2 address bits going to the selector along with the R/~W bit. The result was gated into the bus error logic. The top three bits of the static RAM were directly controlled by the kernel, which allowed the kernel to 'cache' up to 8 processes' worth of protection data in the RAM at any given moment, so context switches were still very fast.
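
A rough software model of that scheme (the real unit was wired logic, not code; the field widths and names here are my guesses):

/* Model of the protection-RAM idea: 3 kernel-controlled "context"
 * bits pick which of 8 cached per-process maps is live; a user-mode
 * access indexes the SRAM with the high address bits, and the output
 * gates the bus-error logic. Supervisor accesses (FC2 set) bypass
 * the unit entirely, which is what lets the machine boot. */

#include <stdbool.h>
#include <stdint.h>

#define CTX_COUNT  8                          /* processes cached in SRAM */
#define PAGE_SHIFT 12                         /* assumed 4 KB granularity */
#define PAGE_COUNT (1u << (24 - PAGE_SHIFT))  /* 68000: 24-bit space      */

static bool protection_ram[CTX_COUNT][PAGE_COUNT];  /* true = allowed */
static unsigned current_ctx;                  /* kernel-controlled bits   */

bool access_allowed(uint32_t addr, bool supervisor)
{
    if (supervisor)                           /* bypassed for the kernel  */
        return true;
    return protection_ram[current_ctx][(addr >> PAGE_SHIFT) & (PAGE_COUNT - 1)];
}

/* Switching among the 8 cached processes is just a register write,
 * no SRAM reload, which is why context switches stayed fast. */
void switch_context(unsigned ctx)
{
    current_ctx = ctx & (CTX_COUNT - 1);
}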

Before the 68030 and 68040 came out, Sun (I think) was running two 68000s in lockstep, one running one cycle behind the other, in order to implement their own MMU. When a fault occurred, they bus-errored the lead chip and paused the second 'behind' chip, so they could take the bus fault, resolve the mapping issue, and then resume the behind chip. Then the 68010 came along and fixed the bus error interrupt stacking bugs in the 68000, and the 68020 came along after that.

The 68030 could hold short loops in its chip logic with some tricks, despite not really having a cache. Unfortunately, the 68040's on-chip cache implementation was horrible and created all sorts of problems for implementers, and by then Intel chips were running much much faster.

When Motorola retired the 68K series, some of their larger embedded users asked Motorola to re-test the 68000 chip specs at a higher clock, since by then the HCMOS process could obviously run the chip much faster than the ~10-12 MHz it was spec'd for. Motorola tested the HCMOS version of the chip to around 50-70 MHz. Such a nice 32-bit chip; I was really sorry to see Moto lose to Intel (mostly because Moto gave up).

-Matt

yesterday

HTML5: It's Already Everywhere, Even In Mobile

m.dillon I like it but... (133 comments)

...but operation is not even remotely smooth enough to compete with apps running on native graphics libraries, on either Apple or Android. It's still too sludgy; browser implementations still do not provide sufficient concurrency to make it work well.

-Matt

5 days ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:If only history (326 comments)

You obviously have not had to deal with customer/user complaints about filesystem corruption. If you had, you'd rapidly conclude that the manpower required to service all of those complaints, let alone track down the cause (which is virtually impossible if TRIM is enabled), not to mention the bad rep you get whether it is your fault or not, is simply not worth it.

-Matt

about a week ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:Easier solution (326 comments)

Complete nonsense. You seem to think that TRIM is some sort of magical command that makes SSDs work better. Spend a little more time understanding the reality and your opinion will change.

Overprovisioning works extremely well. So well that there is really no reason to use TRIM, and plenty of reasons to avoid it, as stated by myself and a few others here.

Overprovisioning and TRIM do not have linear relationships. You don't get double the value by using both, or even double the value by, say, doubling the amount you overprovision, or doubling the amount of free space the SSD thinks it has due to using TRIM. Beyond a certain point, overprovisioned and TRIMmed space will have only a minimal effect on actual wear because, as I've said already, a good SSD will do both static AND dynamic wear leveling. Dynamic wear leveling alone using TRIMmed or overprovisioned space, no matter how much space is available, only delays the inevitable need to relocate static blocks.

You can reduce the copying the SSD has to do, but beyond a certain point the amount of copying that remains will be so far under the radar compared to nominal filesystem operation (normal writes to files, etc.) that it just won't have any further impact.

The non-deterministic nature of TRIM (its effect depends heavily on how full and how fragmented your filesystem is), not to mention firmware bugs, filesystem bugs, the impossibility of diagnosing and tracking down corruption when it occurs, and problems with feeding TRIMs through block managers which use large-block CRCs, winds up creating huge complexities that increase your chance of hitting a bug somewhere that blows you up. It's just not a good trade-off. I will take the *highly* deterministic and generally bullet-proof overprovisioning method over TRIM any day of the week.
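
The diminishing returns are easy to see in a toy model. Below is a little flash-translation-layer simulator (my own construction, not from the post, and grossly simplified): random page writes with greedy garbage collection, run at several spare-space fractions. The write amplification drops steeply at first and then flattens out:

#include <stdio.h>
#include <stdlib.h>

#define PPB 64                  /* pages per erase block        */
#define LP  (1 << 14)           /* logical pages (16 Ki)        */

static int  nblk, frontier;
static int *lpn_at;             /* [blk*PPB+slot] -> lpn        */
static int *where;              /* [lpn] -> blk*PPB+slot, or -1 */
static int *valid, *fill;       /* per-block counters           */
static long flash_w;            /* physical page writes         */

static void page_write(int lpn);

static void gc_one(void)        /* greedy: evict fewest-valid block */
{
    int v = -1, live[PPB], n = 0;
    for (int b = 0; b < nblk; b++)
        if (b != frontier && fill[b] == PPB && (v < 0 || valid[b] < valid[v]))
            v = b;
    for (int s = 0; s < PPB; s++) {
        int lpn = lpn_at[v * PPB + s];
        if (lpn >= 0 && where[lpn] == v * PPB + s) {
            live[n++] = lpn;
            where[lpn] = -1;    /* detach before "erase"        */
        }
    }
    fill[v] = valid[v] = 0;     /* erase: v is free now         */
    for (int i = 0; i < n; i++)
        page_write(live[i]);    /* relocate live pages          */
}

static void page_write(int lpn)
{
    while (fill[frontier] == PPB) {     /* frontier full: find free */
        int f = -1;
        for (int b = 0; b < nblk && f < 0; b++)
            if (fill[b] == 0) f = b;
        if (f < 0) { gc_one(); continue; }
        frontier = f;
    }
    if (where[lpn] >= 0)
        valid[where[lpn] / PPB]--;      /* invalidate old copy  */
    int slot = frontier * PPB + fill[frontier]++;
    lpn_at[slot] = lpn;
    where[lpn] = slot;
    valid[frontier]++;
    flash_w++;
}

static double run(double spare, long nwrites)
{
    nblk   = (int)(LP / (double)PPB * (1.0 + spare)) + 2;
    lpn_at = malloc(sizeof(int) * nblk * PPB);
    where  = malloc(sizeof(int) * LP);
    valid  = calloc(nblk, sizeof(int));
    fill   = calloc(nblk, sizeof(int));
    for (int i = 0; i < nblk * PPB; i++) lpn_at[i] = -1;
    for (int i = 0; i < LP; i++) where[i] = -1;
    frontier = 0;
    for (int i = 0; i < LP; i++) page_write(i);   /* fill the "drive" */
    flash_w = 0;
    for (long w = 0; w < nwrites; w++) page_write(rand() % LP);
    double wa = (double)flash_w / (double)nwrites;
    free(lpn_at); free(where); free(valid); free(fill);
    return wa;
}

int main(void)
{
    const double spare[] = { 0.07, 0.15, 0.30, 0.60 };
    for (int i = 0; i < 4; i++)
        printf("spare %2.0f%% -> write amplification %.2f\n",
               spare[i] * 100.0, run(spare[i], 200000));
    return 0;
}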

The only time I advocate using TRIM is when someone wants to wipe and repartition an SSD from scratch. That's it. I don't even advocate it for cleaning up the swap partition on reboot, because then you can't recover crash dumps (and you might already be paging by that point, so...). Though I suppose one could add some logic to TRIM the swap partition only if no crash dump is present. That's about as far as I would go.

-Matt

about a week ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:Why? (326 comments)

I can't agree with your reasoning. The many computer companies that have lived and died over the years have primarily died because they were producing hardware that could not keep pace with developments in the industry.

Take Commodore, for which I was an Amiga developer (and a machine language programmer in my PET days): Commodore died because AmigaOS was 100% dependent on Motorola, and Motorola couldn't keep up with Intel, period.

NeXT died for the same reason: NeXT couldn't keep up with Intel, and by the time Jobs caved in and went with his dual-architecture 68K/Intel binary format, it was too late. Also, depending on Display PostScript for EVERYTHING was a huge mistake for NeXT, and having 15+ year old OS tools on a weird Mach/BSD core that they never really updated messed them up too. I was a developer for NeXT as well.

In modern times, everyone runs on similarly powerful hardware and generally can stay up-to-date on the hardware front. OS makers die from a lack of apps or a lack of ease-of-use. Apple certainly does not suffer from either.

Linux and the BSDs are entirely dependent on a relatively common library of ~20,000 to 30,000 (substantial) open source apps in order to stay relevant, but all suffer from the lack of a cohesive GUI that is powerful and easy to use. KDE, Gnome, the many other little window managers available... none hold a candle to either Apple or Windows, unfortunately. At least as a consumer machine.

I have no problem running Linux or a BSD as my workstation, as long as I am only doing programming or browsing. But if I want to play a *real* game or run *real* photo or video software (not something stupid like GIMP, which is virtually unusable)... then I have to shift my chair over to my Windows box or my refurbished Mac laptop. For that matter, if I want brainless printing which just works, I have to run it through my Windows box, because CUPS is an over-engineered piece of crap that only works well on Macs... certainly not on Linux or any of the BSDs.

-Matt

about a week ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Re:Easier solution (326 comments)

That is definitely incorrect. TRIM issuance is a filesystem-level or disk-partitioning-level operation, not an OS-level operation. Due to ordering constraints, the OS cannot safely manage TRIM in the manner you suggest. A filesystem can, but honestly I don't know of any filesystems which use TRIM that way. Smart SSD firmware could also delay TRIM in that manner, but I don't know of any that actually do. The filesystem will either issue the TRIM semi-synchronously or issue it as part of a batch cleanup, if it issues it at all. There is a great deal of complexity involved, because filesystems will often gang small-block operations into larger blocks, and you can't use TRIM on the small blocks if you do that and still hope to have your CRC checks work deterministically on the larger block.

Also, remember that for SATA/AHCI (not SAS), the NCQ stuff is a pretty bad hack based on the command ID, and the original TRIM could not be issued without waiting for all other I/O to complete, issuing it synchronously, then starting the I/O back up. For small erase sizes, writing zeros would be much, much faster, if only the SSD spec specifically stated that writing zeros has a TRIM effect. But it doesn't. Also, writing zeros is deterministic (which is good for finding bugs); TRIM is non-deterministic, which makes finding bugs almost impossible.

So, for many reasons, TRIM is about the worst implementation it is possible to have for an SSD. Its only real use is to completely wipe (repartition / blow away) an SSD's entire storage. Other use cases just don't work as well as you might think, and would be better served by actually pushing zeros and spec'ing the SSD to detect the zeros and TRIM the block, or failing that to ensure that the sector still reads back as all zeros, so as not to blow up large-block CRCs and other sanity checks.
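
For what it's worth, the push-zeros alternative is trivial to express. A minimal POSIX sketch (the device path and extent are placeholders), which also shows the determinism argument: you know exactly what the range reads back as afterwards:

/* Sketch: zero-fill a range of a block device instead of TRIMming it.
 * Unlike TRIM, the result is deterministic: the range reads back as
 * zeros, so large-block CRCs and other sanity checks stay valid. */

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char zeros[64 * 1024];
    memset(zeros, 0, sizeof zeros);

    int fd = open("/dev/da0s1b", O_WRONLY);      /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    off_t off = 0, len = 256 * 1024 * 1024;      /* example: zero 256 MB */
    while (off < len) {
        ssize_t n = pwrite(fd, zeros, sizeof zeros, off);
        if (n <= 0) { perror("pwrite"); break; }
        off += n;
    }
    close(fd);
    return 0;
}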

-Matt

about a week ago

Apple Disables Trim Support On 3rd Party SSDs In OS X

m.dillon Easier solution (326 comments)

It isn't really true that SSD performance goes down by a whole lot if TRIM is not enabled. SSD performance and firmware have undergone radical improvements every year, and people have come to the mistaken belief that enabling TRIM is responsible for most of the performance and wear-leveling improvements.

TRIM has numerous problems, not the least of which is drives and/or filesystems which do not implement it properly. Because its use and effects can be seriously non-deterministic (even in a proper implementation), any bug in the drive firmware OR the filesystem's use of TRIM can create serious corruption issues down the line, when the drive actually decides to blow away some of the trimmed sectors. The TRIM command was badly conceived from the get-go.

The easiest and safest solution for getting 95% of the benefit of TRIM without actually using TRIM is to simply partition a factory-fresh drive to leave a bit of unused space at the end... say another 5-10%. As long as it is never written to, the drive will use that space as part of its dynamic wear-leveling mechanism. As long as the drive also does static wear leveling (which nearly all do these days), you wind up with nearly all the benefit of TRIM without having to actually use it. TRIM was more important in the days when static wear leveling was not well implemented (or implemented at all). It is less useful these days.
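
The arithmetic is trivial; a back-of-envelope sketch (the drive size and percentage are example numbers, not a recommendation). The caveat from above applies: the space only counts as spare if it is never written, i.e. the drive is factory fresh or was secure-erased first:

#include <stdio.h>

int main(void)
{
    double drive_gb = 256.0;          /* example drive size    */
    double reserve  = 0.08;           /* ~5-10%, per the post  */
    double part_gb  = drive_gb * (1.0 - reserve);

    printf("partition %.1f GB, leave %.1f GB unpartitioned\n",
           part_gb, drive_gb - part_gb);
    printf("roughly %.0f million 512-byte sectors reserved\n",
           drive_gb * reserve * 1e9 / 512.0 / 1e6);
    return 0;
}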

-Matt

about a week ago

New NXP SoC Gives Android Its Apple Pay

m.dillon Re:So Android DOESN'T have an Apple Pay equivalent (122 comments)

CurrentC has no chance of becoming the primary anything. It uses QR codes, for God's sake... virtually nobody will use it, no matter how hard the merchants push it. It's already DOA and it hasn't even officially launched yet.

-Matt

about two weeks ago

New NXP SoC Gives Android Its Apple Pay

m.dillon Re:Why should I care? (122 comments)

My physical credit card (normal mag stripe) is compromised at least once a year, sometimes more often. I might not be liable for the fraud, but it is VERY inconvenient when it happens: the card gets locked out, and it causes my bank to start verifying more of my transactions, which is just as inconvenient if I don't answer the text message quickly enough and the card gets blocked for no reason.

It got so bad that around 2 years ago I got a second credit card, so I could file it with trusted sites like Amazon and with charities I donate to regularly, and not have to give them all new card numbers every time my over-the-counter card got compromised. That's how bad it has been.

Chip-and-pin is less convenient than ApplePay. Tap-and-pay cards are nominally the same convenience as ApplePay but still have physical security issues. Mag stripe is clearly going to die soon... the data breaches are occurring so often now that not fixing it is no longer an option for merchants. They will have to go to NFC whether they like it or not.

I use ApplePay wherever I can now.

-Matt

about two weeks ago

New NXP SoC Gives Android Its Apple Pay

m.dillon NFC alone isn't enough (122 comments)

You need NFC (which many Android devices have had for years)... but you also need an actual secure chip (not a software emulation or intermediary), the ability to initiate payment without having to turn on the phone or type in a security code (i.e. a fingerprint reader), and the ability to do it with the phone locked and the screen off (meaning you need low-power hardware to detect the NFC field and wake the phone up). Then you need the OS integration to make it all work together seamlessly. And it has to not leak information to anyone except your bank, which obviously needs the information anyway... and there is no smartphone app on the market other than ApplePay which can make that guarantee. Certainly not Google Wallet. Or CurrentC. Or anything else. And it's better than chip-and-pin and tap-to-pay, which both have physical security issues (though they are much better than mag stripe).

Android is missing too many pieces, and it will be at least 1-2 years before it has them all. And even then, there will be such a huge percentage of *new* Android phones that don't have all the pieces that it will only create mass confusion for the general consumer.

The reason Google Wallet has been a failure to date is that it (like every other smartphone-based payment system except ApplePay) is simply not convenient to use compared to swiping a credit card. The reason ApplePay became the #1 smartphone payment mechanism overnight is that it's utterly trivial and convenient to use.

It took me exactly 3 seconds at the local Whole Foods to pull out my phone, tap it with my finger on the fingerprint reader, and put it back in my pocket. It takes me about as long to swipe my card if I don't have to sign, but half the time I do have to sign, so ApplePay immediately wins because I never have to sign (at least not so far).

Eventually all smart phones will do it the Apple way. For now, though, and for the next 1-2 years at a minimum, Apple is the only smartphone game in town that actually works well. Chip-and-pin and tap-to-pay cards work almost as well... they can even be more convenient in some situations, but they don't cover all the security bases.

-Matt

about two weeks ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:Git Is Not The Be All End All (245 comments)

What unbelievable nonsense, but I suppose I shouldn't expect too much from an anonymous coward. You don't even realize that you proved my point with your response.

-Matt

about a month ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:A lot of to-do about $700 (245 comments)

Over $900, and he will match the donations with his own funds so... that's definitely enough for a pretty nice machine. And with the slashdotting, probably a lot more now.

The bigger problem is likely network bandwidth to his home, if he's actually trying to run the server at home. He'd need both uplink and downlink bandwidth, so if he doesn't have FiOS or Google Fiber, that will be a bottleneck.

-Matt

about a month ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:Git Is Not The Be All End All (245 comments)

A single point of failure is a big problem. The biggest advantage of a distributed system is that the main repo doesn't have to take a variable client load that might interfere with developer pushes. You can distribute the main repo to secondary servers and have the developers commit/push to the main repo, but all readers (including web services) can simply access the secondary servers. This works spectacularly well for us.

The second biggest advantage is that backups are completely free. If something breaks badly, a repo will be out there somewhere (and for readers one can simply fail-over to another secondary server or use a local copy).

For most open source projects... probably all open source projects frankly, and probably 90% of the in-house commercial projects, a distributed system will be far superior.

I think people underestimate just how much repo searching costs when one has a single distribution point. I remember the days when FreeBSD, NetBSD, and other CVS repos would be constantly overloaded due to the lack of a distributed solution. And the mirrors generally did not work well at all, because cron jobs doing updates would invariably catch a mirror in the middle of an update and completely break the local copy. So users AND developers naturally gravitated to the original and subsequently overloaded it. SVN doesn't really solve that problem if you want to run actual repo commands, versus grepping one particular version of the source.

That just isn't an issue with git. There are still lots of projects not using git, and I had a HUGE mess of cron jobs that had to try very hard to keep their CVS or other trees in sync without blowing up and requiring maintenance every few weeks. Fortunately most of those projects now run git mirrors, so we can supply local copies of the git repo and broken-out sources for many projects on our developer box, which developers can grep through on our own I/O dime instead of on other projects' I/O dime.

-Matt

about a month ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon Re:SVN and Git are not good for the same things (245 comments)

This isn't quite true. Git has no problem with large repos as long as system RAM and the kernel caches can scale to the data footprint the basic git commands need. However, git *DOES* have an issue with scaling to huge repos in general... it requires more I/O, certainly, and you can't easily operate on just a portion of a repo (a feature which I think Linus knows is needed). So repos well in excess of the RAM and OS resources required for basic commands can present a problem. Google has precisely this problem, and it is why they are unable to use git despite the number of employees who would like to.

Any system built for home or server use by a programmer/developer in the last 1-2 years is going to have at least 16G of RAM. That can handle pretty big repos without missing a beat. I don't think there's much use complaining if you have a very old system with a tiny amount of RAM, but you can ease your problems by using an SSD as a cache. And if you are talking about a large company... having the repo servers deal with very large git repos generally just requires RAM (but the client side is still a problem).

And, certainly, I do not know of a single open source project with this problem that couldn't be solved with a measly 16G of RAM.

-Matt

about a month ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

m.dillon It's not that big a deal (245 comments)

It's just that ESR has an old, decrepit machine to do it on. A low-end Xeon with 16-32G of ECC RAM and, most importantly, a nice SSD for the input data set, plus a large HDD for the output (so as not to wear out the SSD), would do the job easily on repos far larger than 16GB. The IPS of those CPUs is insane. Just one of our E3-1240v3 (Haswell) blades can compile the entire FreeBSD ports repo from scratch in less than 24 hours.

For quiet, nothing fancy is really needed. These CPUs run very cool, so you just get a big copper cooler (with a big, variable, slow fan) and a case with a large (fixed, slow) 80mm intake fan and a large (fixed, slow) 80mm exhaust fan, and you won't hear a thing from the case.

-Matt

about a month ago

Ask Slashdot: Why Can't Google Block Spam In Gmail?

m.dillon Google seems to do a good job for me (265 comments)

Google filters out ~100-200 spams a day from my email box (through which I funnel all my domain mail) and leaves me with (usually) only one or two that I have to specifically mark as spam. I've never been able to do better running my own spam filter.

-Matt

about a month ago

Ask Slashdot: VPN Setup To Improve Latency Over Multiple Connections?

m.dillon Mobile links (174 comments)

Regarding dual mobile internet connections: I haven't done that, but I have used VPNs over mobile hotspots extensively. There is just no way to get low latency, even over multiple mobile links. The main problem is that the bandwidth capabilities of the links fluctuate all the time, and if you try to duplicate the packets you will end up randomly overloading one link or the other as time progresses, because TCP will get ACKs via the other link and thus not back off as much as it should. An overloaded mobile link will drop out, POOF. Dead for a while.

For VPN over mobile links, the key is to NOT run the VPN on the mobile devices themselves. Instead, run it on a computer (laptop, etc.) that is connected to the mobile devices. Then use a standard link aggregation protocol with a ~1 second ping and a ~10 second timeout. You will not necessarily get better latency, but it should solve the dropout problem... it will glitch for a few seconds when it fails over, but the TCP connections will not be lost.
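
A bare-bones sketch of that keepalive/failover logic (my own construction, not a specific VPN product; the interface names and peer address are placeholders, and SO_BINDTODEVICE is Linux-specific and needs root):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define NLINKS   2
#define PING_SEC 1              /* ~1 second ping cadence  */
#define DEAD_SEC 10             /* ~10 second dead timeout */

int main(void)
{
    const char *ifname[NLINKS] = { "wwan0", "wwan1" };   /* placeholders */
    int    sock[NLINKS];
    time_t last_ok[NLINKS];
    int    active = 0;

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                       /* example port */
    inet_pton(AF_INET, "203.0.113.1", &peer.sin_addr);   /* example peer */

    for (int i = 0; i < NLINKS; i++) {
        sock[i] = socket(AF_INET, SOCK_DGRAM, 0);
        /* Pin each keepalive socket to its own uplink. */
        setsockopt(sock[i], SOL_SOCKET, SO_BINDTODEVICE,
                   ifname[i], strlen(ifname[i]) + 1);
        struct timeval tv = { 0, 200000 };               /* 200 ms poll */
        setsockopt(sock[i], SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
        last_ok[i] = time(NULL);
    }

    for (;;) {
        for (int i = 0; i < NLINKS; i++) {
            char buf[16];
            sendto(sock[i], "ping", 4, 0,
                   (struct sockaddr *)&peer, sizeof peer);
            if (recv(sock[i], buf, sizeof buf, 0) > 0)
                last_ok[i] = time(NULL);                 /* link alive */
        }
        /* Fail over only after ~10 s of silence, so brief mobile
         * dropouts just glitch the tunnel instead of killing the TCP
         * sessions. A real implementation would repoint the
         * tunnel/route here. */
        if (time(NULL) - last_ok[active] > DEAD_SEC) {
            for (int i = 0; i < NLINKS; i++) {
                if (time(NULL) - last_ok[i] <= DEAD_SEC) {
                    active = i;
                    fprintf(stderr, "failing over to %s\n", ifname[i]);
                    break;
                }
            }
        }
        sleep(PING_SEC);
    }
}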

-Matt

about a month and a half ago
