
Comments


Help ESR Stamp Out CVS and SVN In Our Lifetime

Forever Wondering Re:I am not going to convert (239 comments)

I've used sccs, rcs, cvs, svn, and git [all in production environments], spanning a 30 year period. git is easier to use than svn, and more powerful [features that git pioneered have been backported into svn, so you're getting the benefits, even if you don't realize it].

Ultimately, it's all about the merging, which is the premise that git is based on. See:
http://www.youtube.com/watch?v...
or
http://www.youtube.com/watch?v...

2 days ago

Help ESR Stamp Out CVS and SVN In Our Lifetime

Forever Wondering I've already done it (239 comments)

I've already created perl scripts to do this. I've already got the blob files and a full git repo for netbsd. Yes, it takes days for these to run but what's the big deal?

I did this because I needed the scripts to convert some of my old personal software from CVS/RCS to git. To debug the scripts, I thought that a true test would be to convert something massive like netbsd. I'm not a snob as I also configured for freebsd and openbsd but didn't run the scripts on those.

I did this on an old 8-core 2.5GHz 64-bit machine with 12GB of RAM [and 120GB of swap space] and enough disk space. The full retail price on this was $3000 five years ago. The same specs can be had much cheaper today.

How many repos of what projects are going to be converted? 10? 100? 1000? Ultimately, there aren't enough projects to justify a machine for 100% usage for a five year period.

I tried to post the script here but various /. posting filters tripped over the 800+ lines. So, here's the top comment section along with the comments for the various functions:

# gtx/cvscvt.pm -- cvs to git conversion
#
#@+
# commands:
# rsync -- sync from remote CVS repo
# cvsinit -- initialize local CVS repository
# co -- checkout CVS source
# agent -- do conversion (e.g. create blob files)
# xz -- compress blob files
# import -- import blob files using git-fast-import
# clone -- clone the bare git repo into a "real" one
# git -- run git command
#
# symbols:
# cvscvt_topdir -- top directory
# cvscvt_module -- cvs module name
# cvscvt_agent -- conversion agent (e.g. cvs2git)
#
# cvshome -- top level for most cvs files
# cvscvt_srcdir -- cvs work directory
# cvsroot -- cvs root directory (CVSROOT)
# cvsroot_repo -- cvsroot/CVSROOT
# cvscvt_rsyncdir -- cvsroot/cvscvt_module
#
# cvscvt_blobdir -- directory to store agent output blobs
# cvscvt_tmpdir -- temp directory
# cvscvt_logdir -- directory for logfiles
#
# git_top -- git files top directory
# git_repo -- git repo *.git directory
# git_src -- git extraction directory
# git_dir -- git [final] repo directory
# git_work -- git [final] working directory
#
# cvscvt_xzlimit -- xz memory compression limit
#@-
# cvscvtinit -- initialize
# cvscvtcmd -- get list of commands
# _cvscvtcmd -- get list of commands
# cvscvtopt -- decode options
# cvscvtall -- do all import steps
# cvscvt_rsync -- sync with remote repo
# cvscvt_tar -- create tar archive
# cvscvt_cvsinit -- create real repository
# cvscvt_co -- do checkout
# cvscvt_agent -- invoke conversion agent [usually cvs2git]
# cvscvt_cvs2git -- run cvs2git command
# cvscvt_xz -- compress blob files
# _cvscvtxz -- show sizes
# cvscvt_import -- run git fast-import command
# cvscvt_clone -- clone the bare git repo into a "real" one
# cvscvt_git -- run git command
# cvscvt_cvsps -- run cvsps command
# cvscvtblobs -- get blob files
# cvscvtshow -- show cvs2git results
# cvscvtshow_evtmsg -- get fake timestamp
# cvscvtshow_etarpt -- show amount remaining
# cvscvtshow_msg -- output a message
# cvscvteval -- set variable
# cvscvtexec -- show vxsystem error
# cvslogfile -- get logfile
# cvslogpivot -- rename logfile

2 days ago

Google Announces Motorola-Made Nexus 6 and HTC-Made Nexus 9

Forever Wondering Strangers with candy (201 comments)

The problem with Android Lollipop [for developers] is [still] the "android fragmentation" problem, which Google is trying to address with its Android One program. Lollipop has 5000 new APIs, but developers have to program to the lowest common denominator, which is probably pre-4.0.

This is in contrast to Apple: most devices get upgraded to the latest iOS in short order [3-6 mos]. IIRC, an author writing an iOS developers' book stripped all pre-iOS8 material from it because he felt that iOS8 was just so much better. Whether he's right or wrong matters less than the fact that the iOS upgrade cycle lets him do it. This makes iOS development much easier than Android development.

The latest Linux runs quite well on older devices. So should Android. This is just like a PC game that, during install, speed tests the machine and backs off on things like resolution, anti-aliasing, etc. to make it run smoothly.

Android One needs even more teeth:
- Vendors _must_ upgrade old devices [even at a loss] unless they can prove [to Google] that it won't run due to memory, etc.
- Vendors shouldn't be able to force people to buy a new device just to get the latest Android by refusing to upgrade Android on "last year's device".

I have a Galaxy S3 and Samsung has upgraded it every six months. I really like the fact that they're not forcing me to upgrade the device just to get the latest/best Android OS. As a result, they've got my loyalty. When I do [eventually] upgrade my device [at a time of my choosing], Samsung's firmware upgrade policy will be a major factor in my staying with them.

If Google can't get vendors to cooperate [even better] on this, it should offer backports of Lollipop [APIs] to older versions via Google Play. This helps consumers with older devices, Android developers, Google, and even the [recalcitrant] vendors [even though they might vehemently disagree].

about a week ago

Password Security: Why the Horse Battery Staple Is Not Correct

Forever Wondering Re:Great! (546 comments)

I use the keystore approach. Each of my devices has a unique private/public key pair. Each device has the public keys of all the others. I disable password based login [except for physical/console login].

Shouldn't be too hard for websites to implement this. Shouldn't be too hard to allow multiple public keys (e.g. just add them to the per-user "authorized_keys" file). Default this off for users at start. But, allow it to be enabled on the account management page [with a place to paste in new public keys and menus to delete/modify the existing ones].

about two weeks ago

Ubisoft Claims CPU Specs a Limiting Factor In Assassin's Creed Unity On Consoles

Forever Wondering Re:Linked? (338 comments)

Thanks for the support. But it seems my post is already going down in flames. Curious, since there have been many slashdot articles about Ubisoft's militant attitude about [their] DRM. On such an article, it would probably get modded up. Or, perhaps, if I used a smiley face. Since I rarely get modded down for posts I make [and some are considerably more controversial], it makes me wonder if there aren't some astroturfing accounts at work. Sigh.

about two weeks ago

Ubisoft Claims CPU Specs a Limiting Factor In Assassin's Creed Unity On Consoles

Forever Wondering Re:Linked? (338 comments)

It's probably not the AI calculations related to gameplay, but Ubisoft's AI calculations related to their DRM that get highest priority in their games ...

about two weeks ago

Google Takes the Fight With Oracle To the Supreme Court

Forever Wondering Re:If Oracle wins, Bell Labs owns the world. (146 comments)

The AT&T copyrights were the genesis of POSIX. Nobody could create a workalike Un*x, so POSIX was originally a "clean room" reimplementation of the Un*x APIs [libc, programs, et al.]. POSIX now serves as a standard, but that wasn't its original purpose.

Because the POSIX methodology has been around for 30 years, it provides some precedent/defense for Google [estoppel].

If Oracle's argument prevails, this kills all Linux, *BSD [OSX] workalike OSes. Also, because ISO copyrights the C/C++ specs [to charge a fee to have a copy], this means that nobody could program in C/C++ without a license from ISO.

The Oracle/Google decision by the appellate court is tantamount to conferring patent protections for a copyright. That is, it's as if, because Louis L'Amour copyrighted his western novels, nobody else could pen a western.

about two weeks ago

Object Oriented Linux Kernel With C++ Driver Support

Forever Wondering Re:Why do people still care about C++ for kernel d (365 comments)

placement new doesn't work without nullifying a few things. Automatic cleanup on scope exit doesn't work for locks in the kernel. See below ... Much more ...

The default [global] new isn't usable either: it throws std::bad_alloc on failure, and with exceptions disabled [as they must be in the kernel] an allocation failure ends in std::terminate--not acceptable. The only thing that works is an alloc function that returns NULL (or (void *) -errno). Returning null is not fatal in the kernel; the caller must be able to deal with it (usually by returning -ENOMEM). So, the [global] new/delete must be changed. Placement delete also has problems [I've left off the macro backslashes for clarity]:

#define GETPTR(_ptr,_typ,_siz)
switch (_typ) {
case 0:
        _ptr = alloca(_siz);
        break;
case 1:
        _ptr = kmalloc(GFP_KERNEL,_siz);
        break;
case 2:
        _ptr = kmalloc(GFP_ATOMIC,_siz);
        break;
case 3:
        _ptr = slab_one(_siz);
        break;
case 4:
        _ptr = slab_two(_siz);
        break; // ...
}

void
myfnc(int typ)
{
        void *ptr;
        GETPTR(ptr,typ,23);
        class abc *x = new(ptr) abc(19,37);
}
At this point, a delete operator [even a placement version] has no idea which pool to release to, because there's no way to pass typ to it. You might create a constructor abc(typ,19,37), but that adds an extra member element to hold typ so the delete operator can get at it; that's additional overhead/complexity that C doesn't have. It might be possible to make it work by casting typ to void * and using that as the placement pointer:
    class abc *x = new((void *) typ) abc(19,37);
and have the class-specific new operator use GETPTR internally. I tested this and it works. However, I haven't been able to get the corresponding placement delete to work as a class-specific overload [yet]. In trying to find the way, I came upon:
http://www.scs.stanford.edu/~d...
It's fairly detailed and lays out a [pretty strong] case against using the new operator [more eloquently than I could do here].

A lot of kernel code puts definitions in the usual place for C [top of function body]. In C++, this invokes the constructors, which is not what you want. Say 10 vars are defined, and the function does a quick check on its args and does an early return -EINVAL: all that create/destroy is wasted. It may even be harmful if the constructors have side effects such as lock acquisition. Note that doing a [wasteful] lock followed by an immediate unlock [just so a destructor can do lock cleanup] is a non-starter in the kernel [you'll never get such code checked in/signed off on].

So, you'd have to go through every kernel function by hand [there are 16.9 million lines of source code] and move the definitions down:
{
        struct foo x;
        if (bad_news)
                return -EINVAL; ...
}
{
        if (bad_news)
                return -EINVAL;
        struct foo x; ...
}

You can't put a lock release in a destructor because you'd need an extra member var that would have to be set/cleared when you acquire/release a lock. That's because the destructor has to have some way of knowing whether to suppress the lock release. So, you're adding an extra variable [that isn't needed in C] just to prevent an attempt to release a lock that was never acquired in the first place. More overhead and slower [and more complex] than its C counterpart.

In kernel functions, multiple different types of locks have to be acquired. Sometimes, it's:
get_lock_a();
x = find_object_in_a();
if (! x)
        goto release_a;
get_lock_b();
y = find_object_in_b();
if (! y)
        goto release_b; // do stuff
release_lock_b(); // do more stuff
release_lock_a(); // do even more stuff
return 0;
release_b:
release_lock_b();
release_a:
release_lock_a();
return -EINVAL;

Although you can create a goto-less version, sometimes the goto's are done deliberately for speed.

Another common snippet:
get_lock_a();
x = find_object_in_a();
if (! x)
        release_lock_a();
return x; // return with object list locked if we found one

Here's another one:
myfnc()
{
        if (in_interrupt()) {
                if (! trylock()) {
                        schedule_work(myfnc);
                        return;
                }
        }
        else {
                getlock();
        } // ...
        release_lock();
}

I've been writing Linux device drivers for a living for the last 20 years. For 12 years before that, Unix. For 10 years before that, other OSes. So, I've had to read an awful lot of kernel code.

These are just the smallest of examples [junior grade--I was in a hurry] of what would be required. There are many more. Try a different approach. Download the kernel source code and start reading it. You'll find out a few things:

(1) C isn't nearly as messy or anemic as most C++ programmers think it is.

(2) See what expert-level C programmers can actually do. The kernel is far cleaner than you probably suspect.

(3) Linus [and crew] don't want to use C++ merely because "they don't understand it". If it were truly beneficial in a kernel environment, they'd have switched long ago.

(4) Contrary to belief [on slashdot], Linus is a very reasonable guy. I met him personally a number of years back. Ignore the bombast in postings; he only does it to counter some strong egos, purely for shock effect, to get stubborn [and wrong] programmers to do their jobs. Linus has had many discussions/battles where the others were saying "you just don't understand" [usually the gcc developers]. In the end, he ends up being right [e.g. it really was a bug in the compiler and not a bug in Linus' understanding].

(5) The kernel is overhead to getting work done [running an application]. Thus, it's designed to be fast--very fast. Other OSes have died because they forgot this. Mach, for example: a [clean] message-passing microkernel architecture that, even after tweaks, was too slow for a production system, so the project was sidelined.

(6) Linux is the basis for Android. Linux powers Google servers. Linux powers Facebook servers. Linux powers most zillion-core supercomputers. Considering all the diversity in arches, devices, etc. if Linux weren't already cleanly designed, it would have collapsed under the weight of maintaining all of the above.

(7) The kernel is "bare metal" programming. C is better suited to that than C++.

If you truly think the kernel will benefit from C++, read [a lot] of the code first [Repeating: 16.9 million lines of code]. Then join the project that started the discussion.

about two weeks ago

Object Oriented Linux Kernel With C++ Driver Support

Forever Wondering Re:Why do people still care about C++ for kernel d (365 comments)

The importance of this is underestimated. With a sanely written C++ program (merely sticking to the modern approaches) memory and resource leaks are a thing of the past, but you still get the completely predictable and deterministic resource management of C.

Unfortunately, you can't use any of that in the kernel [overloading create/destroy new/delete operators won't cut it]. Spinlocks, rwlocks, RCU, slab allocation, per cpu variables, explicit cache flush, memory fence operations, I/O device mappings, ISRs, tasklets, kmalloc vs vmalloc, deadlocks, livelocks, etc. are the issues a kernel programmer has to deal with. Nothing in C++ will help with these and some C++ constructs are actually a hindrance rather than a help.

For instance, copy constructors must be disabled. This was part of a proposal a few years back to make a C++ subset suitable for realtime/embedded. It isn't acceptable to have "x = y" invoke an unexpected amount of code simply because you inadvertently invoked a copy constructor.

Kernels by their nature are messy. Anybody writing kernel code must be fully aware of the implications of doing something and must be aware of the state they're being called in. Abstraction just makes this job harder not easier.

For example, all kernel code must be compiled with -mno-red-zone because of the threat that any base code could receive an interrupt at any time [even between 2-3 machine instructions that comprise the red zone setup code].

Linux already does a pretty fair job of keeping things clean. If you don't believe that, actually go read the kernel source code. And, if something ends up being crufty, it gets cleaned up. Even if that means that some 100 or so modules need corresponding changes.

about three weeks ago

Object Oriented Linux Kernel With C++ Driver Support

Forever Wondering Re:Why do people still care about C++ for kernel d (365 comments)

Virtually all kernel functions return either NULL, true/false, or -errno for errors. No need nor desire for exceptions.

Just how would you do an exception inside an ISR, if you could even find a [credible/safe] way to implement them inside a kernel?

Uncaught exception === kernel panic?

about three weeks ago

Lost Opportunity? Windows 10 Has the Same Minimum PC Requirements As Vista

Forever Wondering Re:Citation needed (554 comments)

Overall, 64-bit has a 20% [or better] performance increase for most workloads. There are factors other than just the size of pointers.

Pointer size is not the major factor in cache pressure, since most of the cache is taken up by data items rather than pointers. These data items are more or less invariant across compilation mode.

64-bit compilers only use 64-bit fetches for non-pointers if you actually request them (e.g. long long). MS is the oddball and defines a "long" to be 32 bits even in 64-bit mode [contrary to the compilation models used by everyone else]. "int" suffices for most data. Where it doesn't, one will [have to] code "long long", and that is invariant across 32/64, except that the 32-bit code will be slower [generating 2-3 instructions for each 64-bit one].

With x86_64, the first 6 arguments to a function are passed in registers and not on the stack (i.e. no wasteful push/pops for argument passing on entry/exit).

For a function that has a lot of automatic [stack] variables, in 32 bit, any non-trivial loop could spend a lot of time dumping a register to its stack frame solely for the purpose of making room for another variable that needs the register. This is register pressure and is considerably higher in 32 bit mode.

Once an address has been loaded in a register, access relative to that base register is identical speedwise between 64 and 32 bit.

64-bit has RIP-relative addressing, which allows data to be addressed as a small offset from the RIP [instruction pointer/program counter] register. Since it's relative to the RIP, two consecutive instructions that address the same data location will have slightly different offsets within each instruction.

You want a study? Try a google search on "performance 32 bit vs 64 bit".

Or, the easy reader version:
http://www.phoronix.com/scan.p...

about three weeks ago

Ask Slashdot: Software Issue Tracking Transparency - Good Or Bad?

Forever Wondering Re:Use an anology (159 comments)

It's not just that the color is pink. "Pink slime" is a meat by-product sometimes added to ground beef. See http://en.wikipedia.org/wiki/P....

about three weeks ago

Ask Slashdot: Software Issue Tracking Transparency - Good Or Bad?

Forever Wondering Re:Not so public disclosure (159 comments)

I agree.

Existing customers will already know about bugs, as they're using the software. They'll want to know what's being done to fix them and will get some comfort if they can see the process (e.g. the fix isn't out yet, but the problem is being diagnosed, test vectors generated, etc.).

In this particular case, since some of the customers are 3rd party developers [programmers], their livelihood [selling their addons] depends on the core product being reliable. They absolutely want access. And, they can usually speed up the bug fix process with their [knowledgeable] feedback.

Adding an NDA as a prereq to access to the issue tracker might be an idea. This prevents the info there from being used as ammo by a competitor.

Even if a competitor buys the product [merely] to get access, they can't use it as a marketing/sales weapon, as that would violate the NDA.

If the competitor does not go for direct access [does not buy the product and is not subject to the NDA] but gets info leaked by an employee of a legit customer, then the competitor would be getting proprietary information [which might be considered industrial espionage, theft of trade secrets, etc.]

In either case, it weakens the competitor's incentive to try to use the information from the issue tracker.

Further, the issue tracker being accessible can be a marketing/sales selling point: "We stand behind what we sell and we're confident enough to have our bug tracker in the open to prove it. Why doesn't our competitor? What are they trying to hide?"

about three weeks ago

Software Patents Are Crumbling, Thanks To the Supreme Court

Forever Wondering Re:Double-edged sword (118 comments)

It already does, but even that can be abused.

In the "Oracle v. Google" trial [regarding Java APIs], Judge Alsup ruled for Google. Google had recreated their software from scratch using the API documents as a reference. That is, they did not use any Oracle/Sun code [except for a rangeCheck function that was less that 10 lines]. Alsup took great pains to write an informed opinion [even learning how to code a little].

However, the Federal Circuit appeals court overturned this. One of the worst decisions ever. It's tantamount to saying that a copyright confers patent-like protections.

about a month ago

BBC: ISPs Should Assume VPN Users Are Pirates

Forever Wondering Re:So if I... (363 comments)

While the reason given may be the true motive of the BBC, it certainly plays into the hands of GCHQ. Either way, it makes the BBC a shill [witting or unwitting] for the surveillance state being fostered by GCHQ.

I can just imagine a goodly fellow from GCHQ going over to the BBC: "You know these VPNs are difficult to spy on. But, the GCHQ can't just say that. Maybe you could help us. Just use some rubbish about copyright piracy. That'll do it."

about a month and a half ago

Some Core I7 5960X + X99 Motherboards Mysteriously Burning Up

Forever Wondering Re:Not just one mobo (102 comments)

The problem is not the fix - once you know the problem is power, it's trivial to fix.

ShanghaiBill is correct. Power is the first thing to check/suspect. In our case, the other engineering team members assumed the lead engineer had checked this--because it is so fundamental. He hadn't. He was almost fired for this.

The problem is identifying the root cause. Power problems are highly subtle - and usually very intermittent. The FPGAs may crash under heavy load, but it's one of the "phase of moon" bugs because you can feed in the same test patterns that crash it and it'll work the next time around.

We had no problem generating test vectors that caused the problem to occur once per hour.

And bugs that are impossible to replicate are the hardest ones to fix - especially if it's a new board that requires a new change to the RTL so you're not exactly sure if it's a hardware or software problem. Or even a compiler problem (since half the issues can easily be caused by bugs in the compiler).

We were quite confident that it was a hardware problem because both boards were 100% compatible software [device driver]-wise. My device drivers also would log all access to the board in realtime, so this would be dumped post-mortem and jointly analyzed by myself and the given logic designer for correctness against the hardware spec.

If this didn't match up, either my driver was wrong or the compiler had a bug. In this particular case, if we weren't tracking an intermittent, a hard-fail error usually resulted in the logic designer fixing his logic. That is, before involving the designer, I prechecked the post-mortem dump and corrected the driver logic errors [if any] first.

So you have a problem, and the problem can be in the hardware, or the software (both the stuff you wrote, and the vendor's stuff), and even on the software running on top of the FPGAs. And you can't replicate it, either, so yeah, it's going to take a LOT of work to find the root cause.

One of my specialties is finding a way to add a stress mode to a driver or the driving application (e.g. a diagnostic program) that reduces a "once a month" intermittent to "once an hour" or, in many cases, forces the same fail in 10 seconds.

If you're lucky, you can isolate matters by running the current design on the previous hardware (provided that's possible) which narrows down the list of potential causes greatly. But because of the lead time in hardware, that's generally impossible - you just can't run the current software on the old hardware because too many things changed, just as you can't run the old software on the new hardware.

I've never had a problem writing device drivers that autodetect and work on any and all current and previous generations of hardware. If you have a driver routine "perform_xop" and what it has to do changes from v1 to v2 of the hardware, you just create a vtable and call vtable->perform_xop, where this is populated with "v1_perform_xop" or "v2_perform_xop". Really, it's not that magical.

In our particular case, we had a hardware "capabilities" register [which I had the logic designers add]. So, our driver would also do things like:
    if (capreg & CAP_HAS_FEATURE_A) ...

about a month and a half ago

Some Core I7 5960X + X99 Motherboards Mysteriously Burning Up

Forever Wondering Re:Not just one mobo (102 comments)

Thanks for what you just said. Dead on and I couldn't agree more.

In our case, we were a small [startup] company, so we didn't have the resources to be second guessing each other. I was doing the device drivers, but I'm also 50% EE. When we found out what had happened, we were struck speechless that the first thing to check [IMO, yours, and the opinion of some of the other engineers] wasn't checked. Sigh.

about a month and a half ago
