
Comments


16-Teraflops, £97m Cray To Replace IBM At UK Meteorological Office

MoonlessNights Sparse on Technical Details (123 comments)

I was interested in what the change-over actually involves, what is driving the performance increase, and how old the existing system is. This information seems to be missing.

What is included actually sounds a little disappointing:
13x faster
12x as many CPUs
4x mass (3x "heavier")

I would have thought that there would be either a process win (more transistors per unit area and all that fun) or a technology win (switching to GPUs or other vector processors, for example) but it sounds like they are building something only marginally better per computational resource. I suppose that the biggest win is just in density (12x CPUs in 4x mass is pretty substantial) but I was hoping for a little more detail. Or, given the shift in focus toward power and cooling costs, what impact this change will have on the energy consumption over the old machine.
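The quoted figures can be turned into a quick back-of-envelope comparison (using only the numbers from the summary above):

```python
# Back-of-envelope comparison of the quoted figures: 13x faster,
# 12x as many CPUs, 4x the mass.
speedup = 13.0
cpu_ratio = 12.0
mass_ratio = 4.0

per_cpu_gain = speedup / cpu_ratio      # throughput improvement per CPU
density_gain = cpu_ratio / mass_ratio   # CPUs per unit mass vs. the old machine

print(f"per-CPU gain: {per_cpu_gain:.2f}x")   # per-CPU gain: 1.08x
print(f"density gain: {density_gain:.2f}x")   # density gain: 3.00x
```

That ~1.08x per-CPU figure is what makes the upgrade look like scale-out rather than any per-resource win.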

Then again, I suppose this isn't a technical publication so the headline is the closest we will get and it is more there to dazzle than explain.

2 days ago

Infected ATMs Give Away Millions of Dollars Without Credit Cards

MoonlessNights Re:These on XP? (83 comments)

That isn't an operating system flaw but a hardware flaw: the boot process loads data from the device into memory and points the CPU at it.

What is actually surprising is that they don't use some kind of DRM-esque bootloader (much like you find in many phones) where it only boots an image with a matching signature.
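The verified-boot idea amounts to a check like the following sketch. The key name and HMAC construction are stand-ins: a real secure bootloader would verify a public-key signature (RSA/ECDSA) against a key burned into ROM, but the refuse-unless-it-matches logic is the same.

```python
import hmac
import hashlib

# Hypothetical key provisioned at the factory (illustrative only; real
# schemes use asymmetric signatures, not a shared secret).
ROM_KEY = b"key-provisioned-at-factory"

def sign_image(image: bytes) -> bytes:
    """Produce the signature the vendor would ship alongside the image."""
    return hmac.new(ROM_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """Bootloader entry point: refuse any image with a bad signature."""
    if not hmac.compare_digest(sign_image(image), signature):
        return "refused: bad signature"
    return "booting"

firmware = b"legit ATM firmware"
sig = sign_image(firmware)
print(boot(firmware, sig))            # booting
print(boot(b"malware payload", sig))  # refused: bad signature
```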

about three weeks ago

The Growing Illusion of Single Player Gaming

MoonlessNights Re:Multiplayer = Devoid of Content (292 comments)

I definitely agree with this. Building a good game requires really good ideas (the game mechanics) and really great content (artwork and writing). These days, it seems to be common to sell a shell of a game and rely on multi-player to make it worth playing. Of course, to sound savvy, you just say you "crowd-sourced" it.

Many indie games have carved out a good niche for themselves by capitalizing on exceptionally creative game mechanics, which is definitely a great thing to see.

about a month and a half ago

The Growing Illusion of Single Player Gaming

MoonlessNights Re:Online only gives the illusion of accomplishmen (292 comments)

But in a truly single-player game, you are only cheating yourself, so you are probably just reducing your own fun and value.

If you want to cheat to "accomplish" things, then I don't really see the problem. It is just a different way of "playing" the game (albeit probably a less interesting one).

about a month and a half ago

Slashdot Talks With IBM Power Systems GM Doug Balog (Video)

MoonlessNights Re:Fully loaded 2U POWER8 for $2,000 USD, yes or n (36 comments)

Having used GCC and XLC on AIX, I can tell you that XLC is definitely the superior compiler.

The difference is less dramatic on Linux, but it is still there.

The difference between the platforms is caused by some interesting knowledge the compiler has of how the OS does some things (readable zero page being the most obvious example).

about 2 months ago

Choose Your Side On the Linux Divide

MoonlessNights False "new vs. old" dichotomies (826 comments)

(FYI: I haven't followed the systemd saga but I have noticed this fight in a growing number of places)

This seems to be a VERY common problem in the modern computing environment: arguments are reduced to ad hominem labels of their supporters where the proponents of "new" are just "kids fascinated by the trendy at the expense of stability" or other "why maintain it when I can write something better?" inexperienced people and the proponents of "old" are just "out of touch old-timers who are afraid of the unknown" or people "only interested in their own job security".

Of course, the reality is a mix of these straw men and legitimate concerns on both sides. The truth is, both sides of the argument make more sense if they are reduced to actual concerns and interests, as opposed to "us versus them" camps.

The truth is that "change for change's sake" is a dangerous position, and the "legacy" moniker is slowly changing from a negative term into something which means "has worked well for a long time".
Alternatively, sometimes new ideas are beneficial precisely because they are designed around current realities, as opposed to sometimes-extinct ones.

This whole notion of "choosing your side" doesn't help anyone since it isn't actually a division, but a conversation/argument. Sometimes stepping forward is correct while sometimes standing still is correct and neither approach is "always correct". Maybe we would choose our next steps better if we worked together to choose them instead of all lining up in our preassigned trenches.

about 2 months ago

Are Altcoins Undermining Bitcoin's Credibility?

MoonlessNights Do people confuse them? (267 comments)

I would assume that credibility pollution is not much of an issue since I don't think people confuse all cryptocurrencies as being "Bitcoin". However, I have no actual data so maybe they do. I would assume that the users of these more exotic currencies are a smaller group who know that these are all different.

The bigger concern I have with these is that they seem rather redundant.

Also, does anyone _actually_ view Bitcoin as "an alternative to fiat currencies and central banks" or more as "a real solution to the problem kludged by PayPal"?

about 2 months ago

NVIDIA's 64-bit Tegra K1: The Ghost of Transmeta Rides Again, Out of Order

MoonlessNights Re:How does this account of caching? (125 comments)

I really wonder about this, too. Perhaps they determined that the common case of a read is one which they can statically re-order far enough ahead of the dependent instructions for it to run without a stall but that doesn't sound like it should work too well, in general. Then again, I am not sure what these idioms look like on ARM64.

The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall of the Transmeta days, their problem was related to constrained memory bandwidth causing their compiler and the application code to compete for the bus (which is a problem this design may also have unless their compiler is _really_ tight, which might be true for this low-ambition design) while the benefits of statically renaming registers and packing instructions into issue groups were still substantial.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

> And those versions are few and far between, exception not the rule.

Not really. It depends on the environment and things like expected application running time. Things like Java, for example, use this kind of collector, and they are used in production, so they shouldn't be excluded from the discussion; my statement still stands.

> Define a steady-state. Not every application has one. This is why real-time stuff doesn't do that - they allocate memory/blocks on the stack at the application (global) level. If you can load the application then everything the application will ever need is allocated. If you cannot load the application, then that's it.

I think we are "having an agreement". If something other than dynamic allocation can be used (the size of something is known at compile time, for example), then it should be allocated using a different mechanism.

> From a security point-of-view, you need to be able to validate that a pointer is valid beyond whether or not it is NULL. You need to know that your application issued the pointer and that the data it points to is valid and within your application space. And this needs to be in real applications, not debug mode.

What do you mean? Under what circumstances is this kind of pointer validation required? It sounds like this is an attempt to detect other bugs, after-the-fact (reading uninitialized or over-written memory, for example).

> Which is a major drawback for using a GC, as it now has to crawl everything periodically.

Whether this is a problem is really the core of this conversation. The problem is the pause time but the question is whether or not that is a real problem and whether other benefits exist to offset it, in the general case of your application.

> So now you're adding indirect pointers for normal pointer usage...which again now means two calls for every pointer and now you've slowed down the whole application. Smart pointers do the same thing in a sense; as does the PIMPL design pattern - it can still be quite fast, but is still (provably) slower than directly using the pointers to start with.

I said nothing about indirect pointers at any point. The pointers are still directly to the memory being used. Managing the underlying memory slabs, directly, in no way invalidates this.

> Except now you are again penalizing the performance by randomly moving the memory around at application run-time. So you are not just hitting the performance to remove unused memory, but to also "optimize" it. And in doing so you remove the ability of the application developer to run-time optimize the memory usage when necessary.

The application developer in managed runtimes has effectively no control over heap geometry. Technically, they aren't even allowed to treat object references as numbers, since they can only compare them for direct equality/inequality.

Also, I am still not sure what you mean by "remove unused memory". Remember that the unit of work, in a managed heap, is either the number of live objects or the number of live bytes. "Unused" (or dead or fragmented) memory is not a cost factor.

These optimization opportunities do a great job of actually improving performance of the application (check the benchmarks - there is a surprising win in both throughput and horizontal scalability).

> Seriously, GCs are probably one of the biggest hits for performance of applications on Android. It's one of the many reasons that Java as a language is SLOW.

Can you substantiate that claim? It sounds surprising. Their heaps aren't big enough to be seriously hurt by GC (unless they keep the heap right on the edge of full). Overall, Java is actually very FAST. The slowest part is generally VM bootstrap (just because it has a long path length and much of it can't be parallelized), followed by application bootstrap (which is not related to Java, but many Java applications tend to be massive - application servers, Eclipse, etc). This win is some combination of GC memory optimizations but, even more so, the "promise of JIT", which gives them a pretty serious win.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

> Not quite. GC are always built on top of malloc+free, not side-stepping them.

This is incorrect. High performance GC implementations are typically built on top of the platform's virtual memory routines, directly (on POSIX, think mmap or shmem). This avoids the waste of "managing the memory manager" and also allows the GC fine-grained control over the heap. On some platforms, this also provides selective disclaim capabilities meaning that the GC actually will give pages back to the OS when contracting the heap, whereas free() wouldn't.
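As a rough illustration of the point, Python's mmap module wraps the same anonymous-mapping facility (POSIX mmap with MAP_ANON) such collectors typically build on. The heap region comes straight from the OS, with no malloc bookkeeping in between; the collector then parcels out that region itself:

```python
import mmap

HEAP_SIZE = 1 << 20  # 1 MiB of heap, reserved in one request

# Anonymous mapping: pages come directly from the OS, not from malloc.
heap = mmap.mmap(-1, HEAP_SIZE)

# The GC would now hand out ranges of this region itself; writing into
# the raw bytes stands in for object allocation here.
heap[0:4] = b"\xde\xad\xbe\xef"
print(len(heap))  # 1048576
heap.close()
```

The selective-disclaim point follows from the same design: since the GC owns whole pages, it can return them to the OS when contracting the heap.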

> But that doesn't mean that your program should just crash because it failed; the program should degrade gracefully in those situations.

Agreed. To avoid this problem, the better pieces of software I have seen did no dynamic allocation once they reached a steady running state. This meant the failure states were easier to wrangle since you could only fail to allocate during bootstrap or when substantially changing running mode.

> Of course, I'd go further and say the standard libs and operating systems should provide a method to validate pointers, but that's more a security concern and the hard part for that is figuring out if the object is valid and on the stack versus valid and on the heap.

The general approach is that a valid pointer is anything non-null. If you need to further introspect the memory mapping or sniff the heap to determine validity, you seriously need to re-think an algorithm. Debugging tools, of course, are exempt from this rule since they are often running on a known-bad address space.

> Now in doing the reference counter in the GC, then yes - it becomes a lot more expensive to keep it and maintain it because it has no knowledge of the actual use of the pointer.

GCs do not store reference counts since they are completely different from this kind of tracking. They determine validity by reachability at GC time.
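The reachability idea can be sketched in a few lines: no per-object counts, just a traversal from the roots at collection time, with anything unvisited being garbage. This also shows why a cycle that reference counting would leak is collected for free:

```python
def reachable(roots, edges):
    """Mark phase: return the set of objects reachable from the roots."""
    seen = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.add(obj)
        stack.extend(edges.get(obj, ()))
    return seen

# A toy object graph: 'b' and 'c' reference each other (a cycle) but
# nothing reachable from the roots points at them.
edges = {"root": ["a"], "a": [], "b": ["c"], "c": ["b"]}
live = reachable(["root"], edges)
print(sorted(live))  # ['a', 'root'] -- the b<->c cycle is garbage
```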

> GC allocation will never be faster than non-GC allocation because it relies on non-GC allocation underneath. Anything the underlying libraries and kernel do, it does as well.

This is incorrect. High performance GCs manage their heap directly and can offer allocation routines based on this reality. The main reason why they can be faster is that they have the ability to move live objects at a later time so fragmentation doesn't need to be proactively avoided in allocation, which is normally what causes pain for malloc+free.
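A minimal sketch of the kind of allocation a moving collector enables: because live objects can be compacted later, allocation is just a pointer increment and a bounds check, with no free-list search at all.

```python
class BumpAllocator:
    """Bump-pointer allocation over a contiguous heap region."""

    def __init__(self, size):
        self.size = size
        self.top = 0  # next free offset

    def alloc(self, nbytes):
        if self.top + nbytes > self.size:
            return None  # in a real runtime this would trigger a GC cycle
        offset = self.top
        self.top += nbytes
        return offset

heap = BumpAllocator(64)
print(heap.alloc(16))  # 0
print(heap.alloc(16))  # 16
print(heap.alloc(64))  # None -> time to collect and compact
```

Fragmentation never enters the picture during allocation precisely because compaction handles it later.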

> GCs will never improve cache performance as it is entirely unrelated and should not be randomly selecting objects. If anything it will decrease cache performance because it will be randomly hitting nodes (during its checks) that the application wants to keep dormant for a time.

This is incorrect. The GC can easily remove unused space between objects (by copying or compacting the adjacent live objects together). Further, given that a GC has complete visibility into the object graph, it can move "referencee" objects closer to the "referencer" objects (especially if there is only one). These 2 factors mean that the effective cache density is higher and that the likelihood of successive accesses being in cache is higher.

For further information regarding this point, take a look at the mutator performance characteristics of GCs which can run in either copying or mark+sweep modes. Paradoxically, the copying performance is generally much higher even though the GC is doing more actual work (this benefit can be eroded by very large objects, since copy cost is higher and relative locality benefits are smaller).

Of course, these statements are based on the assumption that this is a managed runtime, and not a native embeddable GC like Boehm.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

By that definition of "wasted", even a basic malloc+free system will waste massive amounts of memory since free rarely actually removes the heap space it is using from the process. Most of the time, someone using a GC configures it to avoid returning this memory, as well, since the environment is typically tuned against the maximum heap size, so giving it back would just mean it has to be requested again, later. Of course, if this is a concern, we are getting more into the debate of whether dynamic allocation should be used, at all (which is something I wish more people thought about - I also like that you mentioned it can fail, since so many people forget that).

I am not sure what you mean by "dynamic allocation time" in a GC and reference counting is NOT "sufficiently quick". Actual allocation cost, when using a GC-managed heap is generally incredibly cheap (because the allocator doesn't have to do heap balancing, etc). Reference counting can involve walking massive parts of the heap, and writing to it (sometimes many times). GC is generally very fast but, again, depends entirely on the number of live objects so it becomes more expensive the larger the live set becomes.

Whether or not real-time characteristics matter in every situation, or not, is really just determined by the maximum total time lost to the collector within a given window. A real-time collector can give you a guaranteed bound, no matter the size of the live set, but other collectors are typically fast enough for their environments: if you can GC the whole heap in 20 ms, it is probably ok for a game whereas if you can do it in 1 second, it is probably ok for an application server (although that would be an oddly long GC), etc. The question of whether or not an occasional outlier can be tolerated in order to reduce average pause time is another which depends on environment. Reference counting schemes are also subject to these limits but the bounds are harder to fix since the size of a now-dead object graph at any release point is not always constant at said release point.

While I agree with you that doing any real-time work in an environment with dynamic allocation is not a good idea (the only times I have done real-time work, dynamic allocation wasn't even supported on the platform - and that was never a problem), there is some amount of interest in things like real-time Java (hence JSR1) so we have real-time collectors. I have never done any real-time Java programming, but I have seen evidence that it works well.

What do you mean by "as the system needs better and better performance GCs become less and less useful"? A good GC will actually increase the performance of the application as it can improve cache density of the application's live set (not to mention memory-processor affinity).

We do seem to be having 2 conversations here:
1) How does GC compare to other dynamic allocation approaches
2) How does any dynamic allocation approach fare against static or stack-oriented allocation approaches. In this case, I think we both agree that avoiding the issue altogether is preferable, where possible.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

What do you mean by "wasted", in this case?

If you are generally referring to unpredictable pause times, then that is a real concern of GCs (and some general cases of reference counting and some cases of dynamic allocation). Of course, in the case of the GC, the pause time is a function of the live objects (and, in some cases, their size or topology) so I am not sure what you mean by "wasted".

Of course, there are GCs which offer predictable pauses but they are typically lower performance. They only matter in real-time environments so they are not often used.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

Where do you think the deallocation cost appears in the collector? Are you specifically referring to heap management, finalization, heap contraction, etc? The actual cost of running the GC is a function of live objects, not dead ones or the number allocated since the last cycle.

about 3 months ago

Ask Slashdot: "Real" Computer Scientists vs. Modern Curriculum?

MoonlessNights Re:Real Programmers don't use GC (637 comments)

The deterministic nature of a destructor is quite useful but most of your other claims are incorrect or open questions.

Having a destructor act "automatically" (that is, without your code stating that it is happening) is a controversial idea (the trade-off between implicit and explicit operations will always be with us).

The overhead of malloc and free is massive compared to the equivalent operations in a managed runtime. Allocation is typically very cheap (since you don't need to manage the heap as a balancing data structure) and deallocation is effectively free, unless the object needs to be finalized, and there is no need to rebalance the heap. The real benefit of managing your own memory is that you can stack allocate, which is effectively free (although many managed runtimes can do this more aggressively, in most situations).

There is no problem with "wasted" memory backing dead objects between collections since the heap is typically of fixed size between collections (especially in situations like Java where the VM parameters are fixed at start-up time). Reference counting designs not only fail to collect non-trivial object graphs but also do not free memory immediately (although it is typically deterministic with respect to call-return behaviour). Now, returning from a function could involve walking every object in the heap, perhaps many times, to update reference counts. GC only incurs cost at GC time, and that is proportional to the number of live objects, not dead objects (which typically dominate).

You also have avoided discussions of concurrent collection, incremental collection, and the massive cache benefits of object mobility as found in managed runtimes (compacting and copying collectors). These don't apply to things like Boehm, but do apply to things like Java.

about 3 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Re:Stored in cleartext? (126 comments)

So, you think that the problem is that they compromised the site in order to phish the user into installing a keylogger? That would actually explain how they could get the passwords, no matter how they are stored on the server, so it is an interesting interpretation of the article.

I still think that it is a harder sell since it requires tricking millions of users into installing an exploit and hoping that they all use the site. If you were able to pull this off, stealing their password for the target site would be the least valuable thing you would have stolen.

Of course, if you could get that much control over the actual site, you could probably mess with the login page to the point where you could effectively keylog in the JS, which would impact everyone who tried to log in.

The details are too sparse to really tell which approach was used, if the article is actually legitimate.

about 3 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Re:Stored in cleartext? (126 comments)

Yes, that is exactly what it does. That isn't a problem and calling it "through obscurity" isn't correct since you don't need to hide the algorithm for this to work.

Knowing the salting algorithm does not defeat this, at all (as you _can_ steal the salt). The point is that you would need to generate a rainbow table for each user since they each have unique salt. If you are going to do that, you might as well just try brute forcing them all as it would probably be faster.
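The per-user-salt argument can be shown in a short sketch (using the standard library's PBKDF2; the function names here are illustrative, not any particular site's scheme):

```python
import os
import hashlib

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; the salt is unique per user."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Two users with the same password get completely different stored hashes,
# so one precomputed rainbow table cannot attack both records.
salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
print(hash_a == hash_b)  # False
```

Verification simply re-runs the hash with the stolen-able, non-secret salt, which is exactly why knowing the salt doesn't defeat the scheme.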

about 3 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Re:What's one gotta do with the other? (126 comments)

Yeah, it is an odd article.

It seems like they are talking about 2 real problems:
1) SQL injection (which could be solved by only using prepared statements)
2) storing cleartext passwords on the server (which could be solved by storing as hash with per-user salt)
Both of these techniques have been old hat for around a decade so the real news is that so many sites could apparently be compromised this way (of course, the entire article sounds invented, so who knows if that is even true).
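The prepared-statement fix is a one-liner in most drivers. A sketch with sqlite3 (table and column names are illustrative): the placeholder means user input arrives at the database as data, never as SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salt BLOB, pwhash BLOB)")

# A classic injection attempt, passed through a parameterized query.
user_input = "alice'; DROP TABLE users; --"

# The '?' placeholder binds the value; the driver never splices it into
# the SQL string, so the payload is just an unmatched username.
rows = conn.execute(
    "SELECT salt, pwhash FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- and the users table still exists
```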

The "alleged weakness of username/password authentication" seems to be just a "conclusion" they invented for click-bait purposes.

I completely agree with you that their derivation makes no sense. These problems are independent of each other and neither directly implies the conclusion they want to state.

about 3 months ago

Alleged Massive Account and Password Seizure By Russian Group

MoonlessNights Stored in cleartext? (126 comments)

How was this even possible? Passwords should NEVER be something you can steal since they shouldn't actually be stored as clear text (or even encrypted, for that matter).

Hasn't it been common practice, for at least a decade, to store the passwords as a salted hash (using a unique salt for each user)?

You shouldn't be able to steal a password since the site shouldn't have it.

about 3 months ago

Sony Tosses the Sony Reader On the Scrap Heap

MoonlessNights Re:Loved me som PRS-505 (172 comments)

I was very pleased with my 505, as well. I didn't bother with their software (I think it was Windows-only and was mainly used to buy DRM-encumbered stuff from their own store) but just using it as USB mass storage worked well enough for my purposes.

I used it primarily for reading stuff from Project Gutenberg (since there is no DRM insanity) or taking other miscellaneous PDF and text content with me. The screen was quite good and sure beat reading any amount of text off of a glowing screen.

It is too bad that they are leaving the market, but I can tell my use-cases aren't those of the masses, so I am not too surprised.

about 3 months ago

Vint Cerf on Why Programmers Don't Join the ACM

MoonlessNights Good for Deep Specialization (213 comments)

Some of the best developers I have worked with had active ACM memberships and they definitely did come across some exceptionally valuable papers through it.

I think that the reason why more people don't find the value is that the vast majority of software developers are either just code monkeys or have become the "Jack of all trades" type of technical leaders.

There are few opportunities to become a specialist in a single, very deep, area of expertise. You typically need to work for a big enough company who can justify such specialists and not have them constantly prodding you to "float around the company" (since there is an HR theory, currently in style, which states that you should encourage movement within an organization so people don't get bored - now you just alienate the people who like big, complex problems).

about 2 months ago

Submissions

MoonlessNights hasn't submitted any stories.

Journals


Heterogeneous Multi-Processing on big.LITTLE ARM

MoonlessNights MoonlessNights writes  |  about 2 months ago

I am really interested in the possibilities offered via Odroid-XU3. It is possibly the first general-use ARM machine using the big.LITTLE (this one is 4x 2GHz Cortex-A15 and 4x 1.4 GHz Cortex-A7 in one package) design which I have seen for public sale. Previous examples I have seen (other users of Exynos5 SoC) have not been easily adapted for uses beyond their specific deployments.

Unfortunately, I am having trouble finding a concise answer to some questions I have regarding how the scheduler even manages this situation (since schedulers have historically assumed homogeneous computational resources): (thread on Odroid forums). This seems like a really fascinating technical challenge so it would be interesting to hear more details of how this solution has been approached.
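One policy such a scheduler might use (a purely illustrative toy, not the actual Linux HMP/EAS implementation) is to place heavy tasks on the fast cluster and light tasks on the slow one, using the core counts from the XU3 above:

```python
# Toy big.LITTLE placement policy: heavy tasks -> A15 cluster,
# light tasks -> A7 cluster. Threshold value is an assumption.
BIG_CORES = [f"A15-{i}" for i in range(4)]      # 4x 2 GHz Cortex-A15
LITTLE_CORES = [f"A7-{i}" for i in range(4)]    # 4x 1.4 GHz Cortex-A7

def place(task_load, threshold=0.6):
    """Pick the core pool for a task with estimated load in [0, 1]."""
    return BIG_CORES if task_load >= threshold else LITTLE_CORES

print(place(0.9)[0])  # A15-0
print(place(0.1)[0])  # A7-0
```

The hard part, of course, is everything this toy ignores: estimating load, migration cost between clusters, and power-aware packing.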

Seems like an impressive system but I do still hope we will start seeing these kinds of machines built with more memory in the near future. 2 GB is a little on the tight side (of course, wanting to use one of these as a primary home system is probably an unusual use-case, anyway).


What does the Slashdot journal do?

MoonlessNights MoonlessNights writes  |  about 3 months ago

I am having trouble finding information explaining what this feature does. The UI makes it sound almost like it is connected to the submission flow but also seems to come across more like a minimalist blogging system.

Is it both: a blogging system which can be easily promoted to a front-page story, if others find it insightful?

If anyone can point me to some authoritative information, that would really help.
