
US Senate Set To Vote On Whether Climate Change Is a Hoax

MoonlessNights Re:Vote on the negative (666 comments)

"I'm not not licking toads"

--Homer Simpson

about two weeks ago

Lies, Damn Lies, and Tech Diversity Statistics

MoonlessNights Re:SjwDot.org (335 comments)

... or that people are actually individuals and their genders are not their direct identities.

Sure, I am male but that only matters in one situation (due to my handicap of being straight) and we aren't doing that right now so I have no interest in your gender.

If you can help me work through the design of this idea without resorting to arguments relating to "where the braces will go", then I think this may be the beginning of a beautiful colleagueship.

about two weeks ago

In IT, Beware of Fad Versus Functional

MoonlessNights Re:Some practical examples (153 comments)

This is very true. While new ideas can be useful (or even great - everything was new at one point), the hype of the fad leads to tunnel vision where we only talk about how they will revolutionize everything.

The problem I have seen with the power of the fads is that they often become vague and redefined by everyone to fit what they are doing. "Cloud" is a great example: is it a common execution dialect, a remote storage system, or a flexible infrastructure virtualization system? "Agile" had the same problem a few years ago when everyone was doing it, even though their implementations were about as diverse as they were without it.

The "trendy" programming languages are frustrating since they are justified as being "great" because of their abilities to solve small problems with concise (or even terse) expressions. Since few people actually deal with large systems, they don't realize that most of these languages are really only good for prototyping or other small problems and big things are still written in C, C++, or Java for very good reasons.

It is why "legacy" has come to mean "actually works".

about a month and a half ago

Revisiting Open Source Social Networking Alternatives

MoonlessNights OpenAutonomy and the big list of alternatives (88 comments)

(Sorry for the shameless plug)

Personally, I created OpenAutonomy to solve this (and other) problems in an open, federated network (here is a video I did at FSOSS 2014 talking about this space). There is no centre of the network, nor is there much of a limitation in terms of what it can actually do.

That said, most of the approaches to solving this problem focus on social networking, specifically, and there are tons of them!

The problem is figuring out a way to explain the vision to a non-technical audience and get their interest in something new/different. The problems aren't technical, they are related to communication and marketing.

about 2 months ago

16-Teraflops, £97m Cray To Replace IBM At UK Meteorological Office

MoonlessNights Sparse on Technical Details (125 comments)

I was interested in what the change-over actually is, what is driving the performance increase, and how old the existing system is. This information seems to be missing.

What is included actually sounds a little disappointing:
13x faster
12x as many CPUs
4x mass (3x "heavier")

I would have thought that there would be either a process win (more transistors per unit area and all that fun) or a technology win (switching to GPUs or other vector processors, for example), but it sounds like they are building something only marginally better per computational resource. I suppose the biggest win is just in density (12x the CPUs in 4x the mass is pretty substantial), but I was hoping for a little more detail. Or, given the shift in focus toward power and cooling costs, some indication of what impact this change will have on energy consumption compared to the old machine.

Then again, I suppose this isn't a technical publication so the headline is the closest we will get and it is more there to dazzle than explain.

about 3 months ago

Infected ATMs Give Away Millions of Dollars Without Credit Cards

MoonlessNights Re:These on XP? (83 comments)

That isn't an operating system flaw but a hardware flaw: the boot sequence loads data from the device into memory and points the CPU at it.

What is actually surprising is that they don't use some kind of DRM-esque bootloader (much like you find in many phones) where it only boots an image with a matching signature.
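The signature-checked boot idea can be sketched as follows. Real secure-boot schemes use asymmetric signatures with the public key burned into ROM or fuses; HMAC stands in here only so the sketch runs with the standard library, and all names are illustrative rather than any real ATM or phone API.

```python
import hashlib
import hmac

VENDOR_KEY = b"stand-in-for-a-fused-rom-key"  # hypothetical baked-in key

def sign_image(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    # Vendor signs the firmware image at build time.
    return hmac.new(key, image, hashlib.sha256).digest()

def try_boot(image: bytes, signature: bytes) -> bool:
    # Bootloader recomputes the signature and refuses mismatches.
    # compare_digest is constant-time, so the check is not an oracle.
    if not hmac.compare_digest(sign_image(image), signature):
        return False  # tampered or unsigned image: do not boot
    return True  # hand control to the verified image here

good = b"legitimate firmware"
assert try_boot(good, sign_image(good))
assert not try_boot(b"attacker payload", sign_image(good))
```

With a scheme like this, simply swapping the disk image (as the ATM attackers apparently did) would leave the machine refusing to boot.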

about 4 months ago

The Growing Illusion of Single Player Gaming

MoonlessNights Re:Multiplayer = Devoid of Content (292 comments)

I definitely agree with this. Building a good game requires really good ideas (the game mechanics) and really great content (artwork and writing). These days, it seems to be common to sell a shell of a game and rely on multi-player to make it worth playing. Of course, to sound savvy, you just say you "crowd-sourced" it.

Many indie games have carved out a good niche for themselves by capitalizing on exceptionally creative game mechanics, which is definitely a great thing to see.

about 4 months ago

The Growing Illusion of Single Player Gaming

MoonlessNights Re:Online only gives the illusion of accomplishmen (292 comments)

But in a truly single-player game, you are only cheating yourself, so you are probably just reducing your own fun and value.

If you want to cheat to "accomplish" things, then I don't really see the problem. It is just a different way of "playing" the game (albeit probably a less interesting one).

about 4 months ago

Slashdot Talks With IBM Power Systems GM Doug Balog (Video)

MoonlessNights Re:Fully loaded 2U POWER8 for $2,000 USD, yes or n (36 comments)

Having used GCC and XLC on AIX, I can tell you that XLC is definitely the superior compiler.

The difference is less dramatic on Linux, but it is still there.

The difference between the platforms is caused by some interesting knowledge the compiler has of how the OS does some things (readable zero page being the most obvious example).

about 5 months ago

Choose Your Side On the Linux Divide

MoonlessNights False "new vs. old" dichotomies (826 comments)

(FYI: I haven't followed the systemd saga but I have noticed this fight in a growing number of places)

This seems to be a VERY common problem in the modern computing environment: arguments are reduced to ad hominem labels of their supporters. The proponents of "new" are just "kids fascinated by the trendy at the expense of stability" or "why maintain it when I can write something better?" inexperienced people, while the proponents of "old" are just "out-of-touch old-timers who are afraid of the unknown" or people "only interested in their own job security".

Of course, the truth contains some bits of these straw men, combined with massive doses of legitimate concern. Both sides of the argument make more sense if they are reduced to actual concerns and interests, as opposed to "us versus them" camps.

The truth is that "change for change's sake" is a dangerous position, and the "legacy" moniker is slowly changing from a negative term into something which means "has worked well for a long time".
Alternatively, new ideas are sometimes beneficial since they are built around current realities, as opposed to sometimes-extinct ones.

This whole notion of "choosing your side" doesn't help anyone since it isn't actually a division, but a conversation/argument. Sometimes stepping forward is correct while sometimes standing still is correct and neither approach is "always correct". Maybe we would choose our next steps better if we worked together to choose them instead of all lining up in our preassigned trenches.

about 5 months ago

Are Altcoins Undermining Bitcoin's Credibility?

MoonlessNights Do people confuse them? (267 comments)

I would assume that credibility pollution is not much of an issue, since I don't think people confuse all cryptocurrencies with "Bitcoin". However, I have no actual data, so maybe they do. I would assume that the users of these more exotic currencies are a smaller group who know that these are all different.

The bigger concern I have with these is that they seem rather redundant.

Also, does anyone _actually_ view Bitcoin as "an alternative to fiat currencies and central banks" or more as "a real solution to the problem kludged by PayPal"?

about 5 months ago

NVIDIA's 64-bit Tegra K1: The Ghost of Transmeta Rides Again, Out of Order

MoonlessNights Re:How does this account of caching? (125 comments)

I really wonder about this, too. Perhaps they determined that the common case of a read is one which they can statically re-order far enough ahead of the dependent instructions for it to run without a stall but that doesn't sound like it should work too well, in general. Then again, I am not sure what these idioms look like on ARM64.

The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall of the Transmeta days, their problem was related to constrained memory bandwidth causing their compiler and the application code to compete for the bus (which is a problem this design may also have unless their compiler is _really_ tight, which might be true for this low-ambition design) while the benefits of statically renaming registers and packing instructions into issue groups were still substantial.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

> And those versions are few and far between, exception not the rule.

Not really. It depends on the environment and things like expected application running time. Things like Java, for example, use this kind of collector. They are used in production so they shouldn't be excluded from the discussion, thus meaning my statement is still correct.

> Define a steady-state. Not every application has one. This is why real-time stuff doesn't do that - they allocate memory/blocks on the stack at the application (global) level. If you can load the application then everything the application will ever need is allocated. If you cannot load the application, then that's it.

I think we are "having an agreement". If something other than dynamic allocation can be used (the size of something is known at compile time, for example), then it should be allocated using a different mechanism.

> From a security point-of-view, you need to be able to validate that a pointer is valid beyond whether or not it is NULL. You need to know that your application issued the pointer and that the data it points to is valid and within your application space. And this needs to be in real applications, not debug mode.

What do you mean? Under what circumstances is this kind of pointer validation required? It sounds like this is an attempt to detect other bugs, after-the-fact (reading uninitialized or over-written memory, for example).

> Which is a major drawback of using a GC, as it now has to crawl everything periodically.

Whether this is a problem is really the core of this conversation. The problem is the pause time but the question is whether or not that is a real problem and whether other benefits exist to offset it, in the general case of your application.

> So now you're adding indirect pointers for normal pointer usage...which again now means two calls for every pointer and now you've slowed down the whole application. Smart pointers do the same thing in a sense; as does the PIMPL design pattern - it can still be quite fast, but is still (provably) slower than directly using the pointers to start with.

I said nothing about indirect pointers at any point. The pointers are still directly to the memory being used. Managing the underlying memory slabs, directly, in no way invalidates this.

> Except now you are again penalizing the performance by randomly moving the memory around at application run-time. So you are not just hitting the performance to remove unused memory, but to also "optimize" it. And in doing so you remove the ability of the application developer to run-time optimize the memory usage when necessary.

The application developer in managed runtimes has effectively no control over heap geometry. Technically, they aren't even allowed to think of object references as numbers, since they can only compare them for direct equality/inequality.

Also, I am still not sure what you mean by "remove unused memory". Remember that the unit of work, in a managed heap, is either the number of live objects or the number of live bytes. "Unused" (or dead or fragmented) memory is not a cost factor.

These optimization opportunities do a great job of actually improving performance of the application (check the benchmarks - there is a surprising win in both throughput and horizontal scalability).

> Seriously, GCs are probably one of the biggest hits for performance of applications on Android. It's one of the many reason that Java as a language is SLOW.

Can you substantiate that claim? It sounds surprising. Their heaps aren't big enough to be seriously hurt by GC (unless they keep the heap right on the edge of full). Overall, Java is actually very FAST. The slowest part is generally VM bootstrap (just because it has a long path length and much of it can't be parallelized), followed by application bootstrap (which is not related to Java, but many Java applications tend to be massive - application servers, Eclipse, etc). This win is some combination of GC memory optimizations but, even more so, the "promise of JIT", which gives them a pretty serious win.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

> Not quite. GC are always built on top of malloc+free, not side-stepping them.

This is incorrect. High performance GC implementations are typically built on top of the platform's virtual memory routines, directly (on POSIX, think mmap or shmem). This avoids the waste of "managing the memory manager" and also allows the GC fine-grained control over the heap. On some platforms, this also provides selective disclaim capabilities meaning that the GC actually will give pages back to the OS when contracting the heap, whereas free() wouldn't.
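As a rough illustration of "going to the virtual-memory layer directly", here is a minimal sketch using Python's mmap module, which wraps the same POSIX call. The madvise step is Linux-specific (and Python 3.8+), so it is guarded; the heap size is an arbitrary example value.

```python
import mmap

HEAP_SIZE = 1 << 20  # reserve a 1 MiB heap in a single anonymous mapping

heap = mmap.mmap(-1, HEAP_SIZE)  # anonymous mapping, zero-filled by the OS

heap_len = len(heap)
zero_filled = heap[:4] == b"\x00\x00\x00\x00"  # fresh pages arrive zeroed

# A collector carves allocations out of `heap` itself; nothing here goes
# through malloc/free. When contracting the heap, a GC can hand whole
# pages back to the OS (madvise(MADV_DONTNEED) on Linux), which free()
# typically cannot promise.
if hasattr(mmap, "MADV_DONTNEED"):
    heap.madvise(mmap.MADV_DONTNEED, 0, mmap.PAGESIZE)

heap.close()
assert heap_len == HEAP_SIZE and zero_filled
```

A real GC would, of course, manage this region with its own allocator and object headers; the point is only that the heap comes from the OS in page-granular slabs, not from malloc.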

> But that doesn't mean that your program should just crash because it failed; the program should degrade gracefully in those situations.

Agreed. To avoid this problem, the better pieces of software I have seen did no dynamic allocation once they reached a steady running state. This meant the failure states were easier to wrangle since you could only fail to allocate during bootstrap or when substantially changing running mode.

> Of course, I'd go further and say the standard libs and operating systems should provide a method to validate pointers, but that's more a security concern and the hard part for that is figuring out if the object is valid and on the stack versus valid and on the heap.

The general approach is that a valid pointer is anything non-null. If you need to further introspect the memory mapping or sniff the heap to determine validity, you seriously need to re-think an algorithm. Debugging tools, of course, are exempt from this rule since they are often running on a known-bad address space.

> Now in doing the reference counter in the GC, then yes - it becomes a lot more expensive to keep it and maintain it because it has no knowledge of the actual use of the pointer.

GCs do not store reference counts since they are completely different from this kind of tracking. They determine validity by reachability at GC time.
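"Reachability at GC time" can be sketched in a few lines: the mark phase walks the object graph from the roots, and anything not visited is garbage. This is a toy model (objects are just names, references are an adjacency map), not any particular collector's implementation.

```python
def mark(roots, edges):
    """Return the set of live objects reachable from `roots`.

    `edges` maps each object to the objects it references.
    """
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)  # mark, then trace its outgoing references
            stack.extend(edges.get(obj, ()))
    return live

# A -> B -> C is reachable; D and E reference each other but nothing
# reaches them, so the whole cycle is dead despite the mutual references.
edges = {"A": ["B"], "B": ["C"], "D": ["E"], "E": ["D"]}
assert mark(["A"], edges) == {"A", "B", "C"}
```

Note that no per-object count is ever kept: the cost of the walk is proportional to the live set, which is why dead objects are effectively free to a tracing collector.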

> GC allocation will never be faster than non-GC allocation because it relies on non-GC allocation underneath. Anything the underlying libraries and kernel do, it does as well.

This is incorrect. High performance GCs manage their heap directly and can offer allocation routines based on this reality. The main reason why they can be faster is that they have the ability to move live objects at a later time so fragmentation doesn't need to be proactively avoided in allocation, which is normally what causes pain for malloc+free.
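Concretely, when compaction keeps the free space contiguous, allocation degenerates into a pointer bump and a limit check: no free lists, no bin selection, no heap balancing. A toy sketch of the idea (addresses are just integers here):

```python
class BumpAllocator:
    """Illustrative bump-pointer allocator over a contiguous free region."""

    def __init__(self, heap_size):
        self.top = 0           # next free address
        self.limit = heap_size  # end of the contiguous free region

    def alloc(self, size):
        if self.top + size > self.limit:
            return None  # out of space: a real runtime triggers a GC here
        addr = self.top
        self.top += size  # the entire bookkeeping is this one bump
        return addr

a = BumpAllocator(64)
assert a.alloc(16) == 0
assert a.alloc(16) == 16    # consecutive allocations are adjacent
assert a.alloc(64) is None  # 32 bytes used; 64 more will not fit
```

The adjacency of consecutive allocations is also where part of the cache benefit comes from: objects allocated together tend to be used together.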

> GCs will never improve cache performance as it is entirely unrelated and should not be randomly selecting objects. If anything it will decrease cache performance because it will be randomly hitting nodes (during its checks) that the application wants to keep dormant for a time.

This is incorrect. The GC can easily remove unused space between objects (by copying or compacting the adjacent live objects together). Further, given that a GC has complete visibility into the object graph, it can move "referencee" objects closer to the "referencer" objects (especially if there is only one). These 2 factors mean that the effective cache density is higher and that the likelihood of successive accesses being in cache is higher.

For further information regarding this point, take a look at the mutator performance characteristics of GCs which can run in either copying or mark+sweep modes. Paradoxically, the copying performance is generally much higher even though the GC is doing more actual work (this benefit can be eroded by very large objects, since copy cost is higher and relative locality benefits are smaller).

Of course, these statements are based on the assumption that this is a managed runtime, and not a native embeddable GC like Boehm.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

By that definition of "wasted", even a basic malloc+free system will waste massive amounts of memory, since free rarely actually removes the heap space it is using from the process. Most of the time, someone using a GC configures it to avoid returning this memory as well, since the environment is typically tuned against the maximum heap size, so giving it back would just mean it has to be requested again later. Of course, if this is a concern, we are getting more into the debate of whether dynamic allocation should be used at all (which is something I wish more people thought about - I also like that you mentioned it can fail, since so many people forget that).

I am not sure what you mean by "dynamic allocation time" in a GC and reference counting is NOT "sufficiently quick". Actual allocation cost, when using a GC-managed heap is generally incredibly cheap (because the allocator doesn't have to do heap balancing, etc). Reference counting can involve walking massive parts of the heap, and writing to it (sometimes many times). GC is generally very fast but, again, depends entirely on the number of live objects so it becomes more expensive the larger the live set becomes.

Whether or not real-time characteristics matter in every situation, or not, is really just determined by the maximum total time lost to the collector within a given window. A real-time collector can give you a guaranteed bound, no matter the size of the live set, but other collectors are typically fast enough for their environments: if you can GC the whole heap in 20 ms, it is probably ok for a game whereas if you can do it in 1 second, it is probably ok for an application server (although that would be an oddly long GC), etc. The question of whether or not an occasional outlier can be tolerated in order to reduce average pause time is another which depends on environment. Reference counting schemes are also subject to these limits but the bounds are harder to fix since the size of a now-dead object graph at any release point is not always constant at said release point.

While I agree with you that doing any real-time work in an environment with dynamic allocation is not a good idea (the only times I have done real-time work, dynamic allocation wasn't even supported on the platform - and that was never a problem), there is some amount of interest in things like real-time Java (hence JSR1) so we have real-time collectors. I have never done any real-time Java programming, but I have seen evidence that it works well.

What do you mean by "as the system needs better and better performance GCs become less and less useful"? A good GC will actually increase the performance of the application as it can improve cache density of the application's live set (not to mention memory-processor affinity).

We do seem to be having 2 conversations here:
1) How does GC compare to other dynamic allocation approaches
2) How does any dynamic allocation approach fare against static or stack-oriented allocation approaches. In this case, I think we both agree that avoiding the issue altogether is preferable, where possible.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

What do you mean by "wasted", in this case?

If you are generally referring to unpredictable pause times, then that is a real concern of GCs (and some general cases of reference counting and some cases of dynamic allocation). Of course, in the case of the GC, the pause time is a function of the live objects (and, in some cases, their size or topology) so I am not sure what you mean by "wasted".

Of course, there are GCs which offer predictable pauses but they are typically lower performance. They only matter in real-time environments so they are not often used.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

Where do you think the deallocation cost appears in the collector? Are you specifically referring to heap management, finalization, heap contraction, etc.? The actual cost of running the GC is a function of live objects, not dead ones or the number allocated since the last cycle.

about 6 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

The deterministic nature of a destructor is quite useful but most of your other claims are incorrect or open questions.

Having a destructor act "automatically" (that is, without your code stating that it is happening) is a controversial idea (the trade-off between implicit and explicit operations will always be with us).

The overhead of malloc and free is massive compared to the equivalent operations in a managed runtime. Allocation is typically very cheap (since you don't need to manage the heap as a balancing data structure) and deallocation is effectively free, unless you need to be finalized, and there is no need to rebalance the heap. The real benefit of managing your own memory is that you can stack allocate, which is effectively free (although many managed runtimes can do this more aggressively, in most situations).

There is no problem with "wasted" memory backing dead objects between collections, since the heap is typically of fixed size between collections (especially in situations like Java where the VM parameters are fixed at start-up time). Reference counting designs not only fail to collect non-trivial object graphs but also do not free memory immediately (although it is typically deterministic with respect to call-return behaviour). Now, returning from a function could involve walking every object in the heap, perhaps many times, to update reference counts. GC only incurs cost at GC time, and that is proportional to the number of live objects, not dead objects (which typically dominate).
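The cycle problem with naive reference counting is easy to demonstrate with a toy model: two objects that point at each other never reach a count of zero, even after the last outside reference is gone. (This simulates the counts by hand; it is not how any real runtime stores them.)

```python
class Obj:
    """Toy object with a hand-maintained reference count."""

    def __init__(self):
        self.refcount = 0
        self.refs = []

def link(src, dst):
    src.refs.append(dst)
    dst.refcount += 1

a, b = Obj(), Obj()
a.refcount = 1   # one external reference holds `a`
link(a, b)       # a -> b
link(b, a)       # b -> a: now a cycle

a.refcount -= 1  # the last external reference goes away

# Neither count ever reaches zero, so naive refcounting leaks both,
# while a tracing GC would see neither is reachable and reclaim them.
assert a.refcount == 1 and b.refcount == 1
```

This is exactly why runtimes that do use reference counting (CPython, for example) still need a backup cycle detector.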

You also have avoided discussions of concurrent collection, incremental collection, and the massive cache benefits of object mobility as found in managed runtimes (compacting and copying collectors). These don't apply to things like Boehm, but do apply to things like Java.

about 6 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Re:Stored in cleartext? (126 comments)

So, you think that the problem is that they compromised the site in order to phish the user into installing a keylogger? That would actually explain how they could get the passwords, no matter how they are stored on the server, so it is an interesting interpretation of the article.

I still think that it is a harder sell since it requires tricking millions of users into installing an exploit and hoping that they all use the site. If you were able to pull this off, stealing their password for the target site would be the least valuable thing you would have stolen.

Of course, if you could get that much control over the actual site, you could probably mess with the login page to the point where you could effectively keylog in the JS, which would impact everyone who tried to log in.

The details are too sparse to really tell which approach was used, if the article is actually legitimate.

about 6 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Re:Stored in cleartext? (126 comments)

Yes, that is exactly what it does. That isn't a problem and calling it "through obscurity" isn't correct since you don't need to hide the algorithm for this to work.

Knowing the salting algorithm does not defeat this, at all (as you _can_ steal the salt). The point is that you would need to generate a rainbow table for each user since they each have unique salt. If you are going to do that, you might as well just try brute forcing them all as it would probably be faster.
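A minimal sketch of per-user salting with the standard library makes the point: each user gets a unique random salt stored next to the hash, so one precomputed rainbow table cannot cover all users even if the attacker steals the salts. (PBKDF2 and the iteration count here are just one reasonable choice, not whatever the compromised sites used.)

```python
import hashlib
import os

ITERATIONS = 100_000  # deliberately slow to make brute force expensive

def hash_password(password, salt=None):
    # Fresh random salt per user; stored alongside the digest.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS) == digest

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("wrong", salt, digest)
# Same password, different user -> different salt -> different digest,
# so a single precomputed table is useless across users.
assert hash_password("hunter2")[0] != salt
```

Nothing here is secret except the passwords themselves: the algorithm, iteration count, and salts can all be public, which is exactly why this is not "security through obscurity".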

about 6 months ago


MoonlessNights hasn't submitted any stories.



Heterogeneous Multi-Processing on big.LITTLE ARM

MoonlessNights MoonlessNights writes  |  about 5 months ago

I am really interested in the possibilities offered via Odroid-XU3. It is possibly the first general-use ARM machine using the big.LITTLE (this one is 4x 2GHz Cortex-A15 and 4x 1.4 GHz Cortex-A7 in one package) design which I have seen for public sale. Previous examples I have seen (other users of Exynos5 SoC) have not been easily adapted for uses beyond their specific deployments.

Unfortunately, I am having trouble finding a concise answer to some questions I have regarding how the scheduler even manages this situation (since schedulers have historically assumed homogeneous computational resources): (thread on Odroid forums). This seems like a really fascinating technical challenge so it would be interesting to hear more details of how this solution has been approached.

Seems like an impressive system, but I do still hope we will start seeing these kinds of machines built with more memory in the near future. 2 GB is a little on the tight side (of course, wanting to use one of these as a primary home system is probably an unusual use-case, anyway).


What does the Slashdot journal do?

MoonlessNights MoonlessNights writes  |  about 6 months ago

I am having trouble finding information explaining what this feature does. The UI makes it sound almost like it is connected to the submission flow but also seems to come across more like a minimalist blogging system.

Is it both: a blogging system which can be easily promoted to a front-page story, if others find it insightful?

If anyone can point me to some authoritative information, that would really help.
