
Comments


Profanity-Laced Academic Paper Exposes Scam Journal

raxx7 Re:remember this.... (134 comments)

You should dig a little deeper.

For the first link:
- The survey was conducted only among geoscientists and engineers in the province of Alberta, Canada (where the oil industry is a major employer); it is not a worldwide survey of climate experts.
- The actual results of the survey were "27.4% believe it is caused by primarily natural factors (natural variation, volcanoes, sunspots, lithosphere motions, etc.), 25.7% believe it is caused by primarily human factors (burning fossil fuels, changing land use, enhanced water evaporation due to irrigation), and 45.2% believe that climate change is caused by both human and natural factors".

Put simply, the article you linked to is outright lying.

4 days ago

Intel Announces Major Reorg To Combine Mobile and PC Divisions

raxx7 Re:AMD wins again (75 comments)

The problem with your post is that on newegg, 48 USD gets you an AMD A6-5400K while 46 USD gets you an Intel Celeron G1610.
The Celeron is actually a slightly faster CPU and uses less power, although the A6 has a much faster GPU.

Intel is, essentially, in the enviable position of having chips which are faster, consume less power and are actually smaller, thus cheaper to manufacture.
And they are segmenting and slicing the market as they wish.
Yes, they offer top notch performance at a premium. And even more performance if you pay an arm and a leg.
But if you just want something cheap and competent, they got that too.

about two weeks ago

Intel Announces Major Reorg To Combine Mobile and PC Divisions

raxx7 Re:The problem is cost per mm of silicon (75 comments)

Intel still claims a reduction in cost per transistor for their 14 nm process.
Not sure whether TSMC claims the same for their 16FF process.

about two weeks ago

Intel Announces Major Reorg To Combine Mobile and PC Divisions

raxx7 Re:Can Apple Move to ARM on the Desktop? (75 comments)

Many caveats there.

First, Apple A8 cores are relatively big. They clearly are optimized for power and performance, not size and cost.
Not sure how much advantage they have over a Broadwell core.

Secondly, a higher-performance design would require major work. Current A8X chips are not multi-socket capable, so you can't just put more of them together. Compared to desktop Intel/AMD chips, they also have relatively weak memory systems (less than 50% of the bandwidth), smaller caches and weaker GPUs.
So Apple would need to design a new SoC based on the A8 core, with more cores, more cache, faster memory interface and faster GPU.

Third, single thread performance still matters a lot. Not all things are multi-threaded and Amdahl's law is generally a bitch.
There have been many who have tried to compete by putting many weak CPU cores together, and they have mostly failed.
That's why Intel has the market share it has. And that is why Apple went through the trouble and expense of designing their own CPU cores, which have arguably the best single thread performance of all available ARM cores.
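
To put rough numbers on Amdahl's law, here's a toy calculation in Python (the 80% parallel fraction is an assumption for illustration, not a measurement of any real workload or chip):

  # Amdahl's law: overall speedup is limited by the serial fraction of the work.
  def speedup(parallel_fraction, n_cores):
      return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

  # With 80% of the work parallelizable, even 16 weak cores stay below 5x.
  for n in (2, 4, 8, 16):
      print(n, "cores ->", round(speedup(0.8, n), 2), "x")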

about two weeks ago

Intel Announces Major Reorg To Combine Mobile and PC Divisions

raxx7 Re:Can Apple Move to ARM on the Desktop? (75 comments)

Short answer: an A8X won't work at 3 GHz, period.

Long answer:
All CPU designs, like other digital logic circuits, have a maximum target frequency at which they can operate correctly.
And by targeting a higher maximum frequency, there is a penalty to pay in area, power and performance. A well-designed CPU targeting 3 GHz but running at 1.5 GHz will consume more power and perform worse than a well-designed CPU targeting 1.5 GHz.

All available evidence and educated guessing point to Apple's CPUs in fact targeting the frequency range in which they ship (~1.5 GHz), and there is no chance in hell they will run at much more than 2 GHz.
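
For a feel of why chasing frequency is expensive, here's a back-of-envelope sketch of CMOS dynamic power (the capacitance and voltage figures are illustrative assumptions, not Apple data):

  # Dynamic power scales roughly as C * V^2 * f, and a higher frequency
  # target usually also means a higher supply voltage.
  def dynamic_power(cap_farads, volts, freq_hz):
      return cap_farads * volts**2 * freq_hz

  low  = dynamic_power(1e-9, 0.9, 1.5e9)  # a 1.5 GHz design point (assumed figures)
  high = dynamic_power(1e-9, 1.1, 3.0e9)  # a 3 GHz design point at higher voltage
  print("power ratio ~", round(high / low, 1))  # roughly 3x the power for 2x the frequency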

about two weeks ago

Japanese Maglev Train Hits 500kph

raxx7 Re:Boarding issues (418 comments)

Trains can be attacked by terrorists. In fact, they have been.
A bomb was placed in a TGV luggage car in 1983.
Commuter trains in Madrid were bombed in 2004.
The London subway was bombed in 2005.
(side note: attacking subways and commuter trains provides a much bigger body count and more disruption of daily life than attacking long-distance high-speed trains)

But nobody added security screening to them.

A train riding on a track isn't the same as a plane flying 30,000 feet in the air.
Yes, they can be bombed and hijacked.
But, unlike a plane, a hijacked train can't be flown into a building or into another country or into the sea (after running out of fuel).
When something goes wrong with a train, it stops.
In particular, trains running under automated safety systems will stop even without human intervention.

about two weeks ago

Joey Hess Resigns From Debian

raxx7 Re:Unfortunate, but not surprising (450 comments)

Joey Hess did not propose such a vote, Ian Jackson did.
In fact, Joey Hess endorsed an alternative which basically states "we need no stinking GR".
https://www.debian.org/vote/2014/vote_003#amendmentproposerc

about three weeks ago

Fedora 21 Beta Released

raxx7 Re:beta blockers? what have they smoked? (56 comments)

You can try to use them. But a modern X desktop on anything but Xorg's Xserver is untested and unsupported, if it works at all.
Effectively, modern X desktops depend on Xorg's Xserver, as the others (XFree86, Kdrive) lag too far behind in development.

Regarding your first question.
A number of distributions (including RHEL6) used systemd-logind without running systemd as init.
However, to do so with a modern kernel, you need to implement systemd's cgroup proxy functionality.
Ubuntu has done just that, in the form of cgmanager, as they plan to use upstart for a few more releases before migrating to systemd.

If you don't run systemd as init, you can use the good old dbus-daemon.

Like Xorg's Xserver, systemd provides features that developers/maintainers want to exploit.
And those developers/maintainers are not willing to put in the extra work needed to achieve their goals without those features.
And although there's a very vocal outcry against systemd, it's not being translated into actual work to provide those features, only into complaints and demands that others stop depending on systemd.

about three weeks ago

Ask Slashdot: How Useful Are DMARC and DKIM?

raxx7 Re:Sending e-mail reliably (139 comments)

I'm pretty sure I didn't write "don't use" all of that stuff.
I just wrote "beware".

The majority of ISPs, data center operators and hosting providers are pro-active or act quickly to keep their networks clean of spammers -- they don't want to end up on Spamhaus' shitlist.
I don't have any problems with our business internet connection, nor do I have any problem with my hobbies' hosting providers.
I do my bit to keep clean and they do their bit and it all works well.

But some operators are lazy, and a minority actually try to cash in on being spammer-friendly.
Beware of those, because they'll end up getting you on Spamhaus' shitlist.

about three weeks ago

Ask Slashdot: How Useful Are DMARC and DKIM?

raxx7 Re:Sending e-mail reliably (139 comments)

Follow the RFCs. Don't leave your outgoing server poorly configured.
A number of e-mail servers check for strict adherence to the RFCs, a check many spambots fail.

Implement DKIM and DMARC, maybe SPF.
If you're using a mailing list, beware of how SPF/DKIM and DMARC can break it (there's a small sketch of the DNS side at the end of this comment).

Don't send unwanted bulk e-mail. Really. DON'T SEND UNWANTED BULK E-MAIL even if you're asking for donations to UNICEF.

Don't let your outgoing e-mail server be used to send unwanted bulk e-mail. Don't leave it as an open relay, don't bounce messages, and filter outgoing e-mail for unwanted bulk e-mail.
If you can't sanitize its output, consider using a different outbound e-mail server for the important stuff.

Don't let your network be used to send unwanted bulk e-mail.
If you can't sanitize your network, place your outgoing e-mail server somewhere else.

Don't place your outgoing e-mail server on a domestic internet connection. Most of those are permanently blacklisted.

Beware of your ISP/data center's network.
If they are not active in blocking spammers on their systems/network, you can end up blacklisted as collateral damage.
Be especially wary of shared hosts.
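
As a small illustration of the DNS side of this, here's a sketch in Python that just looks up where SPF, DMARC and DKIM records live (it assumes the dnspython package; the domain and DKIM selector are made up):

  import dns.resolver  # pip install dnspython

  domain = "example.org"  # hypothetical domain
  selector = "mail"       # hypothetical DKIM selector

  for name in (domain,                               # SPF is published here
               "_dmarc." + domain,                   # DMARC policy
               selector + "._domainkey." + domain):  # DKIM public key
      try:
          for rdata in dns.resolver.resolve(name, "TXT"):
              print(name, "->", rdata.to_text())
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          print(name, "-> no TXT record found")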

about three weeks ago

Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

raxx7 Re:Are you sure? (863 comments)

You really should have tried to read and comprehend my text and the nice links I posted.
The current vote is only on whether packages are allowed to depend on a given init system or not.
None of the proposals brought to a vote asks to change the decision made in February: the next Debian stable will use systemd by default.

about a month ago

Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

raxx7 Re:Are you sure? (863 comments)

Errr... Wayland depends on systemd even more than Gnome.

Well, technically neither Gnome nor Wayland nor much else depend on systemd.
They depend on features and documented stable interfaces, which just so happen to only be implemented by systemd and its friends (e.g., -logind) at the moment.

But until alternative implementations of these come up, Gnome and Wayland depend on systemd and it's not going to change.

about a month ago

Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

raxx7 Re:Are you sure? (863 comments)

Sure there is: systemd!
systemd replaces init, of course. It also replaces dbus-daemon.
systemd does require udevd, which you are probably already using anyway.
systemd does require journald, but it can feed logs to good ol' rsyslogd too.

Other than that, the systemd code tree provides replacements for a number of daemons, but you don't have to use them. You can keep using the old ones.

That said, systemd developers aren't writing new daemons just for fun (well, it's open source, so they kind of are).
Some of the old daemons have issues.
ntpd does the job, but it's a beast of a full-featured NTP server, way more than you want if you just want to keep your system's time synchronized to an NTP server. That's one of the reasons we have OpenNTPd. But while it's much lighter, AFAIK OpenNTPd does not guarantee monotonic clock adjustments.
So... systemd-timesyncd is yet another attempt at an NTP daemon. Unlike ntpd, it only does the bare minimum required of a client, but it does implement monotonic clock adjustments.
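
To make "monotonic clock adjustments" concrete, here's a toy comparison of stepping versus slewing the clock (the 2-second offset and 500 ppm slew rate are illustrative assumptions):

  # Stepping jumps the clock, so time can go backwards; slewing runs the clock
  # slightly slow or fast until the offset is absorbed, so time stays monotonic.
  offset = 2.0        # seconds the local clock is ahead of the NTP server
  slew_rate = 500e-6  # 500 ppm, a typical maximum slew rate

  print("step: set the clock back", offset, "s at once (time goes backwards)")
  print("slew: run the clock slow for about", round(offset / slew_rate), "s")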

about a month ago

Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux

raxx7 Re:Are you sure? (863 comments)

You heard wrong.
The Debian Technical Committee voted in February on making systemd the default init system for the next Debian stable release (Jessie).

==== RESOLUTION ====

We exercise our power to decide in cases of overlapping jurisdiction
(6.1.2) by asserting that the default init system for Linux
architectures in jessie should be systemd.

Should the project pass a General Resolution before the release of
"jessie" asserting a "position statement about issues of the day" on
init systems, that position replaces the outcome of this vote and is
adopted by the Technical Committee as its own decision.

==== END OF RESOLUTION ====
https://lists.debian.org/debian-devel-announce/2014/02/msg00005.html

A group of Debian members has now triggered a General Resolution voting process whereby software packages in Debian would not be allowed to depend on a particular init system, with a few exceptions.
https://www.debian.org/vote/2014/vote_003

However, it is very unlikely to pass.
It requires the support of only 6 out of ~1000 Debian members to trigger this process, and the first attempt, back in February, did not even achieve that.
This attempt did get its 6 supporters, but it was also quickly met by 3 counter-proposals.

It's also not enforceable.
The Debian project can't force its package maintainers to do the work of, e.g., patching Gnome to work without systemd if needed -- they're volunteers, and that's actually in the Debian constitution.
Much less can it force upstream to do it.
The only thing the Debian project can do is remove software. Which is stupid: instead of promoting init system diversity, you're removing options for people who don't mind using a particular one.

Ultimately, this proposal was brought by a guy whose anti-systemd mentality runs so deep that removing software from Debian just because it depends on systemd is acceptable to him.

about a month ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (226 comments)

Here are some facts for you.
Fact: Almost no X application allows itself to be detached from and reattached to the X server.
Fact: Applications' performance is getting worse and worse over the network.
Fact: a number of people have felt the need to develop and use middlemen to avoid such problems: NX and its many clones, Xpra.
Fact: when I migrated our lab from CentOS5 to CentOS6, each and every user of Kate or Kile complained they were unusable from home and I had to teach them to use Xpra.
Fact: I often use Xpra even on the LAN, because of poor performance and the ability to detach/reattach applications.

And some more facts.
Fact: nobody depends on X network transparency. Lots of people need to be able to run graphical applications remotely, but X network transparency is neither the only way to do it nor the best. Having to launch an Xpra server is more inconvenient than ssh -X, but hardly a show-stopper.
Fact: X network transparency will keep working under Wayland as long as XWayland is around and the toolkits don't remove X11 backends.

No, Wayland won't be network transparent. But there are ways to get the actual job done.
And you'll actually be able to ssh -X for as long as the applications don't drop support for X11.
So... what's the problem?

about a month ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (226 comments)

At no point have I stated that Keith has stopped working on X.
I only pointed out an article which documents (some of) Keith's opinion on improvements brought by Wayland.

There's one thing you must understand about open source: you can't force others to do your bidding. Nobody can declare X or Wayland to be the future and kill the other.
Open source, in general, has never been a democracy but a "do"-ocracy: those who do the work decide, and the others are left only with the choice of using what has been done. All you can do is do the work and see who chooses to use it.

Like everything else, X and Wayland will continue to work as long as enough people are willing to put in the work to make them work. No more, no less.
People will be forced to change from X to Wayland only if and when the people who are currently doing the work required to put together X-based Linux desktops decide they don't want to do it any more and nobody replaces them.
X's network transparency will continue to work as long as XWayland is kept up to date and as long as the toolkits keep an X backend and the applications take care not to break.

That said, looking at statements made by the people doing said work, it looks likely that, in the not-so-distant future, you won't be able to run the latest version of Gnome (and maybe even KDE) on top of an X display server.

On the other hand, XWayland will probably be kept up to date forever, as toolkits older than GTK3 and Qt5 probably will never get native Wayland backends.
And removal of X backends from other toolkits is something in a very distant and hazy future.

Finally, Daniel has always qualified what he meant by "network transparency is broken".
It has been broken by countless application writers, who only care about and test their applications in a local environment, where the SHM extension gives them immense bandwidth to communicate with the server and latency is measured in a few microseconds.
They've, unintentionally but increasingly, made the applications perform worse and worse in bandwidth- and latency-limited environments.
And they (the application writers) have no intention of doubling back.
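
A back-of-envelope illustration of why that hurts over a network (the round-trip count is an assumption about a "chatty" application, not a measurement):

  # If drawing one frame requires N synchronous round trips to the X server,
  # the round-trip latency puts a hard ceiling on the achievable frame rate.
  round_trips_per_frame = 50
  for label, latency_s in (("local socket", 50e-6), ("WAN link", 20e-3)):
      ceiling = 1.0 / (round_trips_per_frame * latency_s)
      print(label, "-> at most", round(ceiling), "frames per second")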

And this is the second time this has happened, by the way. Before Xrender, applications were on their way to pushing so much data to render anti-aliased forms that using them over the networks of a decade ago was also problematic.

So, heed Daniel's words: X network transparency is broken.
Your ability to use it with the latest applications is decreasing and there isn't anything the X or Wayland developers can do about it.
Because, again, you can't force the application developers to make sure it does work well over the network.
And this is why Wayland does not support network transparency: it adds a lot of complexity and brings no benefit, because most application developers are not willing to make sure the applications work well.

about a month ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (226 comments)

Because you were obviously mixing up pixel scraping (used by RDP) with pixel pushing.
Quoting yourself: "Why? RDP also pushes pixels"

You were also obviously confusing using Xrender to push pixels with using Xrender in a way which leads to low bandwidth usage.
Again, quoting yourself: "The term 'software rasterizer' does not necessarily imply that it does not use Xrender."

Using the X protocol for a screen scraper is a bad idea:
a) it does not provide means to compress the image on the wire.
b) the display X server won't always keep the contents of the window. E.g., if you minimize and then restore a window, the display X server may require the contents of the window to be re-sent, even though they haven't been changed by the application.

I really can't fathom why you insist on using the X protocol for a pixel scraper. There isn't anything new here. We have had a bunch of pixel scrapers around for a long time (VNC, RDP, Xpra and, to some degree, NX) and they've all found it useful to use a dedicated protocol.

Some people like RDP because, despite it being owned by MS (and subject to patents), the FreeRDP project provides a good implementation of the RDP protocol which, AFAIK, compares favourably with other pixel scrapers.

about a month ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (226 comments)

http://lwn.net/Articles/491509/

Keith Packard has been one of the lead developers of X for ~25 years.
Among his countless contributions is Xrender: http://keithp.com/~keithp/talks/usenix2001/xrender/

about a month ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (226 comments)

You're mixing up three very different methods.

Method 1 is plain dumb pixmap pushing. Applications mostly render client-side and then constantly push the result to the server as large, non-reusable pixmaps.
Of course, X supports this perfectly. Not only can you push pixmaps over the socket, X also has a shared memory extension which lets you do this fast in the local case.
However, while this works "correctly" over the network, it requires too much bandwidth to be useful. E.g., to push an 800x600 RGB window at 30 fps, you're pushing about 43 MB/s. Trivial between two processes on your computer, not so trivial between your home and your work computer.
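
The arithmetic behind that figure, for the record (the window size and frame rate are just the example numbers above):

  # Bandwidth needed to push a full window as raw RGB pixmaps every frame.
  width, height, bytes_per_pixel, fps = 800, 600, 3, 30
  print(width * height * bytes_per_pixel * fps / 1e6, "MB/s")  # ~43 MB/s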

Method 2 is _clever and effective_ use of server side storage and rendering. The keywords here are _clever_ and _effective_. If your use of Xrender is not _clever and effective_, then it degrades to plain dumb pixmap pushing.
Using Xrender, for example, an application will upload pixmaps/glyphs to the server side. It will then issue many small rendering commands which refer to the pixmaps/glyphs stored on the server.
The poster-child example for Xrender: once the glyphs for all the required characters have been uploaded, applications can draw anti-aliased text using relatively little bandwidth between client and server.
_Clever and effective_ use of core X primitives and Xrender is what allows X applications to work over the network effectively.

The problem is that, increasingly, application developers are making less and less _clever and effective_ use of server-side rendering -- largely because, as I pointed out before, method 1 works so much better for local clients.

Method 3 is pixel scraping. In this method, we have four actors instead of two. The pixel scraper server presents itself to the applications as a (local) X server, but it renders not to a screen but to a frame buffer. The applications draw normally to this (local) X server; they have no idea about the remote display X server.
The pixel scraper server asynchronously scans the frame buffer for changes and sends them, usually compressed, to the pixel scraper client on the other side. The pixel scraper client receives the changes and draws them into the client's X server, which renders them onto the screen.
Pixel scraping also eliminates the latency problem, as the applications see a local server.

RDP effectively works by pixel scraping (along with Windows drawing primitives).

about a month ago

Submissions

raxx7 hasn't submitted any stories.

Journals

raxx7 has no journal entries.
