
Comments


Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

Here are some facts for you.
Fact: almost no X application allows itself to be detached from and reattached to the X server.
Fact: applications' performance over the network is getting worse and worse.
Fact: a number of people have felt the need to develop and use middlemen to avoid these problems: NX and its many clones, and Xpra.
Fact: when I migrated our lab from CentOS 5 to CentOS 6, every user of Kate or Kile complained they were unusable from home, and I had to teach them to use Xpra.
Fact: I often use Xpra even on the LAN, because of poor performance and the ability to detach/reattach applications.

And some more facts.
Fact: nobody depends on X network transparency as such. Lots of people need to be able to run graphical applications remotely, but X network transparency is neither the only way to do it nor the best. Having to launch an Xpra server is less convenient than ssh -X, but hardly a show stopper.
Fact: X network transparency will keep working under Wayland as long as XWayland is around and the toolkits don't remove their X11 backends.

No, Wayland won't be network transparent. But there are ways to get the actual job done.
And you'll still be able to ssh -X as long as the applications don't drop support for X11.
So... what's the problem?

4 days ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

At no point have I stated that Keith has stopped working on X.
I only pointed to an article which documents (some of) Keith's opinions on the improvements brought by Wayland.

There's one thing you must understand about open source: you can't force others to do your bidding. Nobody can declare X or Wayland to be the future and kill the other.
Open source, in general, has never been a democracy but a "do"-ocracy: those who do the work decide, and everyone else is left only with the choice of whether to use what has been done. All you can do is do the work and see who chooses to use it.

Like everything else, X and Wayland will continue to work as long as enough people are willing to put in the work to keep them working. No more, no less.
People will be forced to change from X to Wayland only if and when the people currently doing the work required to put together X-based Linux desktops decide they don't want to do it any more and nobody replaces them.
X's network transparency will continue to work as long as XWayland is kept up to date, the toolkits keep an X backend, and the applications take care not to break it.

That said, looking at statements made by the people doing said work, it looks likely that in the not-so-distant future you won't be able to run the latest version of GNOME (maybe even KDE) on top of an X display server.

On the other hand, XWayland will probably be kept up to date indefinitely, as toolkits older than GTK3 and Qt5 will probably never get native Wayland backends.
And removal of X backends from other toolkits lies in a very distant and hazy future.

Finally, Daniel has always qualified what he meant by "network transparency is broken".
It has been broken by countless application writers, who only care about and test their applications in a local environment, where the SHM extension gives them immense bandwidth to the server and latency is measured in a few microseconds.
They have, unintentionally but increasingly, made their applications perform worse and worse in bandwidth- and latency-limited environments.
And they (the application writers) have no intention of doubling back.

And this is the second time this has happened, by the way. Before Xrender, applications were on their way to pushing so much data to render anti-aliased forms that using them over the networks of a decade ago was also problematic.

So, heed Daniel's words: X network transparency is broken.
Your ability to use it with the latest applications is decreasing, and there isn't anything the X or Wayland developers can do about it.
Because, again, you can't force application developers to make sure their applications work well over the network.
And this is why Wayland does not support network transparency: it adds a lot of complexity and brings no benefit, because most application developers are not willing to make sure their applications work well over it.

4 days ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

Because you obviously were mixing up pixel scraping (used by RDP) with pixel pushing.
Quoting yourself: "Why? RDP also pushes pixels".

You were also obviously confusing using Xrender to push pixels with using Xrender in a way which leads to low bandwidth usage.
Again, quoting yourself: "The term 'software rasterizer' does not necessarily imply that it does not use Xrender."

Using the X protocol for a screen scraper is a bad idea:
a) it does not provide a means to compress the image on the wire.
b) the display X server won't always keep the contents of the window. E.g., if you minimize and then restore a window, the display X server may require the contents of the window to be re-sent, even though they haven't been changed by the application.

I really can't fathom why you insist on using the X protocol for a pixel scraper. There isn't anything new here. We have had a bunch of pixel scrapers around for a long time (VNC, RDP, Xpra and, to some degree, NX), and they have all found it useful to use a dedicated protocol.

Some people like RDP because, despite being owned by MS (and subject to patents), the FreeRDP project provides a good implementation of the RDP protocol which, AFAIK, compares favourably with other pixel scrapers.

4 days ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

http://lwn.net/Articles/491509/

Keith Packard has been one of the lead developers of X for ~25 years.
Among his countless contributions is Xrender: http://keithp.com/~keithp/talks/usenix2001/xrender/

about a week ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

You're mixing up three very different methods.

Method 1 is plain dumb pixmap pushing. Applications mostly render client-side and then constantly push the result to the server as large, non-reusable pixmaps.
Of course, X supports this perfectly. Not only can you push pixmaps over the socket, X has a shared memory extension which lets you do this fast in the local case.
However, while this works "correctly" over the network, it requires too much bandwidth to be useful. E.g., to push an 800x600 RGB window at 30 fps, you're pushing 43 MB/s. Trivial between two processes on your computer, not so trivial between your home and work computers.
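The arithmetic behind that figure is a quick sanity check of the numbers above:

```python
# Bandwidth needed to push a full window as raw pixmaps every frame.
width, height = 800, 600     # window size from the example above
bytes_per_pixel = 3          # RGB, 8 bits per channel
fps = 30

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e6:.1f} MB/s")  # 43.2 MB/s
```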

Method 2 is _clever and effective_ use of server-side storage and rendering. The keywords here are _clever_ and _effective_: if your use of Xrender is not _clever and effective_, it degrades to plain dumb pixmap pushing.
Using Xrender, for example, an application will upload pixmaps/glyphs to the server side. It will then issue many small rendering commands which refer to the pixmaps/glyphs stored on the server.
The poster child example for Xrender: once the glyphs for all the required characters have been uploaded, Xrender allows applications to draw anti-aliased text using relatively little bandwidth between client and server.
_Clever and effective_ use of core X primitives and Xrender is what allows X applications to work over the network effectively.

The problem is that, increasingly, application developers are making less and less _clever and effective_ use of server-side rendering. Largely because, as I pointed out before, method 1 works so much better for local clients.

Method 3 is pixel scraping. In this method, we have four actors instead of two. The pixel scraper server presents itself as a (local) X server to the applications, but it renders not to a screen but to a frame buffer. The applications draw normally to this (local) X server; they have no idea about the remote display X server.
The pixel scraper server asynchronously scans the frame buffer for changes and sends them, usually compressed, to the pixel scraper client on the other side. The pixel scraper client receives the changes and draws them into the client's X server, which renders them to the screen.
Pixel scraping also eliminates the latency problem, as the applications see a local server.
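The scanning step can be sketched in a few lines of Python: compare the previous and current framebuffer tile by tile, and compress only the tiles that changed. This is a toy model; the tile size, the flat-list framebuffer, and all names are illustrative, not any real scraper's API.

```python
import zlib

TILE = 8  # tile edge in pixels (toy value; real scrapers tune this)

def diff_tiles(prev, curr, width, height):
    """Yield (x, y, compressed_bytes) for each tile that changed.

    prev/curr are flat lists with one int (0-255) per pixel; a real
    scraper would read them from the headless server's framebuffer.
    """
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            tile_prev, tile_curr = [], []
            for y in range(ty, min(ty + TILE, height)):
                row = slice(y * width + tx, y * width + min(tx + TILE, width))
                tile_prev.extend(prev[row])
                tile_curr.extend(curr[row])
            if tile_prev != tile_curr:
                yield tx, ty, zlib.compress(bytes(tile_curr))

# Toy frames: 16x16, one pixel changes -> exactly one dirty tile to send.
prev = [0] * (16 * 16)
curr = list(prev)
curr[5] = 255
dirty = list(diff_tiles(prev, curr, 16, 16))
print([(x, y) for x, y, _ in dirty])  # [(0, 0)]
```

Only the dirty tile crosses the wire, which is why a scraper's bandwidth scales with how much of the screen actually changes rather than with the frame rate.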

RDP effectively works by pixel scraping (alongside Windows drawing primitives).

about a week ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

Well, I also find it strange that people say they see no difference.
My workplace is a simple LAN, with a dozen or so computers attached to a decent switch, perfectly capable of full bandwidth and pings under 200 µs.
I run remote stuff every day and I have a bunch of applications, using a bunch of toolkits, that perform anywhere from worse-than-local-but-OK to really badly.
Off the top of my head, GTK3 applications are the only ones based on a modern toolkit that still run very well.

The fact that Qt5 needs libXrender doesn't mean it has an Xrender-based backend.
AFAIK, Qt5 only has two backends: a software rasterizer and OpenGL. I've never seen any evidence of other backends.
http://qt-project.org/wiki/Qt5GraphicsOverview

Trying to forward Wayland clients over the X protocol is dumb.
Wayland clients (applications) do nothing but push pixmap buffers to the server. Try to forward that directly over the network in any shape or form and you get terrible performance.
The only solution is either to have a pixel scraping mechanism or to have Wayland clients (applications) support some other protocol.
Which most Wayland clients (applications) already do: they support X.

about a week ago

Direct3D 9.0 Support On Track For Linux's Gallium3D Drivers

raxx7 Re:Is D3D 9 advantageous over 10? (54 comments)

That D3D10 state tracker never made it past an early prototype and, I think, Mesa/Gallium still lacks what's needed for D3D10 support (it doesn't support OpenGL 4.x yet either).

about a week ago

Debian Talks About Systemd Once Again

raxx7 Re:The gradual middle road (519 comments)

systemd-journald has long been capable of forwarding logs to rsyslogd.
And systemd-journald can even be configured to keep its binary log in /var/run/journal, which gets deleted at each reboot.
And Debian uses this configuration by default.
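For reference, this kind of setup corresponds to a couple of lines in /etc/systemd/journald.conf (a sketch; exact defaults vary by distribution and systemd version):

```ini
[Journal]
# Keep the binary journal only under /run (tmpfs), so it vanishes on reboot.
Storage=volatile
# Hand every message to the traditional syslog daemon (e.g. rsyslogd).
ForwardToSyslog=yes
```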

Unfortunately, if they acknowledged this, systemd haters would be left with one less thing to hate.

Other functions provided by the systemd package (e.g., session management by systemd-logind) are just a lot of work to implement, especially if you try to go for a more decoupled architecture.
Not that people aren't working on it, though.

about a week ago

Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

raxx7 Re:I still don't see what's wrong with X (225 comments)

That's not what he meant, I think.

Modern X applications (well, the toolkits) are resorting so much to simply pushing bitmaps to the X server that X is becoming unusable over the network.
E.g., the Qt developers found some time ago that their client-side rendering backend was much faster for local usage than the Xrender backend. So they made it the default in Qt 4.4. And for Qt5, they didn't even bother with an Xrender-based backend.

The client-side rendering backend pushes so much data that Kate is actually painful to use over a Gigabit Ethernet local network, never mind working from home. And the same can be said of many other applications I use daily.

I run applications remotely on a daily basis. But now it's mostly under Xpra instead of plain X forwarding.
Good performance over the network is buried very deep in the priority list of most graphical application developers, if it's on the list at all.

And an Xpra-like solution is something that can be implemented for Wayland.

about a week ago

End of an Era: After a 30 Year Run, IBM Drops Support For Lotus 1-2-3

raxx7 Re:And yet IBM soldiers on... (156 comments)

OS/2 was too advanced for the PC market of its time.
OS/2 was something we take for granted now: a preemptive multi-tasking, protected-memory OS.
But such an OS had drawbacks: it requires more memory, a bit more CPU, and tends not to work well for old applications that were written to run on a bare-metal "OS" like DOS.
Although Microsoft had had one since 1993 (Windows NT), it was not until 2001 (Windows XP) that they were able to converge the PC market onto such an OS.

IBM didn't abandon PowerPC, only a given market segment: that of desktop/laptop processors.
Since Apple was the only customer for those processors, there was not enough money in it to fund competitive designs.
Instead, IBM kept designing and producing PowerPC processors for other market segments: the Wii/Wii U, Xbox 360 and PS3 all use IBM PowerPC designs.
They have also built a number of supercomputers based on PowerPC CPU designs. And of course, they keep making POWER CPUs for their servers.

about three weeks ago

The Quiet Revolution of Formula E Electric Car Racing

raxx7 Re:Quiet? (116 comments)

You can actually do some fun things with it:

https://www.youtube.com/watch?v=TDAut8Tlf7w

about 1 month ago

The IPv4 Internet Hiccups

raxx7 Re:IPv6 won't fix this problem (248 comments)

While BGP routers need to know a route for every prefix, they can then compress the routing table by merging prefixes which have the same routing.
The problem is that the IPv4 address space is too fragmented to allow much compression.

IPv6 address allocations should allow for less fragmentation and better compression.
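Python's standard library can illustrate the merging: adjacent prefixes collapse into one aggregate, while a non-adjacent prefix has to stay a separate entry. This is a toy sketch; real BGP aggregation also requires the merged prefixes to share the same routing (next hop), which is assumed here.

```python
import ipaddress

# Two adjacent /25s (with the same imagined next hop) merge into a
# single /24; the unrelated prefix cannot be aggregated with them.
routes = [
    ipaddress.ip_network("203.0.113.0/25"),
    ipaddress.ip_network("203.0.113.128/25"),
    ipaddress.ip_network("198.51.100.0/24"),
]
merged = list(ipaddress.collapse_addresses(routes))

print([str(n) for n in merged])  # ['198.51.100.0/24', '203.0.113.0/24']
```

Fragmented IPv4 allocations rarely sit adjacent like this, which is exactly why the global table compresses so poorly.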

about 2 months ago

Linux Needs Resource Management For Complex Workloads

raxx7 Re:complex application example (161 comments)

Interesting. It sounds a bit like an application I have.
Like yours, it involves UDP and Python.
I have 150,000 "jobs" per second arriving in UDP packets. "Job" data can be between 10 and 1400 bytes, and as many "jobs" are packed into each UDP packet as possible.

I use Python because, intermixed with the high-performance job processing, I also have slow but complex control sequences (and I'd rather cut my wrists than move all that to C/C++).
But to achieve good performance, I had to reduce Python's contribution to the critical path as much as possible and offload it to C++.

My architecture has 3 processes, which communicate through shared memory and FIFOs.
The shared memory is divided into fixed-size blocks, each big enough to contain the information for a maximum-size job.

Process A is C++ and has two threads.
Thread A1 receives the UDP packets, decodes the contents, writes the decoded job into a shared memory block and stores the block index number in a queue.
Thread A2 handles communication with process B. This communication consists mainly of sending process B block index numbers (telling B where to get job data) and receiving block index numbers back from process B (telling A that the block can be reused).

Process B is single-threaded Python.
When in the critical loop, its main job is to forward block index numbers from process A to process C and from process C back to process A.
(It also does some status checks and control functions, which is why it's in the middle.)
To keep the overhead low, the block index numbers are passed in batches of 128 to 1024 (each block index number corresponding to a job).

Process C is, again, multi-threaded C++.
The main thread takes the data from the shared memory, returns the block index numbers to process B and pushes the jobs through a sequence of processing modules, in batches of many jobs.
Within each processing module, the batch of jobs is handed out to a thread pool and collected back, preserving order.
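The block-index recycling at the heart of this design can be sketched in Python, with threads and queues standing in for the three processes and their FIFOs. All names are illustrative; the real system uses shared memory across separate processes, batched index transfers, and C++ for A and C.

```python
import queue
import threading

BLOCK_SIZE = 1400   # enough for a maximum-size job
NUM_BLOCKS = 4096   # pool of reusable blocks

# Stand-in for the shared-memory segment: a pool of fixed-size blocks.
pool = [bytearray(BLOCK_SIZE) for _ in range(NUM_BLOCKS)]
free_blocks = queue.Queue()      # indices process A may write into
for i in range(NUM_BLOCKS):
    free_blocks.put(i)

a_to_b = queue.Queue()           # filled block indices: A -> B
b_to_c = queue.Queue()           # forwarded indices:    B -> C

def process_a(jobs):
    """Decode jobs into blocks and hand the block indices onward."""
    for job in jobs:
        idx = free_blocks.get()              # blocks if the pool is empty
        pool[idx][:len(job)] = job
        a_to_b.put((idx, len(job)))
    a_to_b.put(None)                         # end-of-stream marker

def process_b():
    """Relay indices from A to C (status/control checks would go here)."""
    while (item := a_to_b.get()) is not None:
        b_to_c.put(item)
    b_to_c.put(None)

def process_c(results):
    """Consume each block's payload, then recycle its index for reuse."""
    while (item := b_to_c.get()) is not None:
        idx, size = item
        results.append(bytes(pool[idx][:size]))
        free_blocks.put(idx)                 # block goes back to A's pool

results = []
jobs = [b"job-%d" % i for i in range(10)]
workers = [threading.Thread(target=process_a, args=(jobs,)),
           threading.Thread(target=process_b),
           threading.Thread(target=process_c, args=(results,))]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(results[0], results[-1])  # b'job-0' b'job-9'
```

The key point the sketch captures: only small integers cross the queues, while the bulky job payloads stay in place in the shared block pool.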

about 3 months ago

BMW, Mazda Keen To Meet With Tesla About Charging Technology

raxx7 Re:And again... (137 comments)

It's actually pretty natural.
Usually, standards aren't created in a void. Before a standard can be written and agreed upon, someone has to design, test, and maybe deploy real stuff.
More often than not, different teams will explore different avenues of research. And once they have invested, nobody wants to throw away their work and move to someone else's spec.
On the other hand, it often takes multiple iterations to reach a good standard.

If you look at it...
CHAdeMO has been around since 2010 and, for a while, it was the only real solution.
But Tesla didn't like it, probably because of licensing terms, bulkiness and limited power (62.5 kW max), so they created their own connector for the Model S.
Other manufacturers and other players did not like it either, for similar reasons. And Tesla's solution was proprietary.
So they went ahead and created yet another standard.

Finally, Tesla has decided to play nice.

about 4 months ago

BMW, Mazda Keen To Meet With Tesla About Charging Technology

raxx7 Re:nissan or mazda? (137 comments)

The Model S's battery has both more capacity and the ability to supply and absorb more power.

I think for a given technology, battery capacity and power (both in and out) tend to be related.

about 4 months ago

Microsoft Runs Out of US Address Space For Azure, Taps Its Global IPv4 Stock

raxx7 Re:No more private networks? (250 comments)

By assign, I mean giving them an address.
Whether it's by static configuration, stateless auto-configuration or DHCP.

about 4 months ago

Microsoft Runs Out of US Address Space For Azure, Taps Its Global IPv4 Stock

raxx7 Re:No more private networks? (250 comments)

Yes and no.

Yes, you'll need to assign every one of your machines an address based on the prefix assigned to you by your ISP.
In the absence of NAT66, your computers will need these addresses to access the internet.

No, in the sense that you can additionally assign your machines an address based on a unique local address (ULA) prefix.
You should use a randomly generated ULA prefix to avoid future conflicts (e.g., in case you need to establish a VPN to another network also using ULAs).
But otherwise, it's legal to use a trivial prefix (fd00:whatever).
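A randomly generated ULA prefix, as RFC 4193 recommends, can be produced with a few lines of Python (a sketch using only the standard library):

```python
import secrets
import ipaddress

# RFC 4193 ULA: the 0xfd prefix byte plus a random 40-bit Global ID
# yields a /48 that you can subnet freely within your site.
global_id = secrets.randbits(40)
prefix_int = (0xFD << 120) | (global_id << 80)
ula = ipaddress.IPv6Network((prefix_int, 48))

print(ula)  # e.g. fd3b:9c71:2e04::/48 (random each run)
```

With 40 random bits, the odds of two independently generated prefixes colliding when you later join two networks are negligible.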

about 4 months ago

Are the Glory Days of Analog Engineering Over?

raxx7 Changing skill sets (236 comments)

As usual, an interesting article with a terrible Slashdot title, overplaying the aspect of analog vs digital.

In fact, it's a lot more a matter of discrete vs integrated.

Decades ago, people designed and built digital logic, even entire digital computers, based on discrete (individual) valves or transistors.
Then we moved to designing them with many small integrated circuits which performed simple logic functions, like the 7400-series TTL circuits.
And then we moved to highly integrated digital circuits.
I've used microprocessors and field-programmable gate arrays, and have even designed custom ASICs. But outside school, I never used a 7400 chip, much less designed an AND gate out of discrete BJT transistors.
It's all digital, but they are different skill sets.

A similar change of skills has happened for analog designers.
Some things that 20 years ago were done by a circuit on a board using discrete transistors are now done inside an integrated circuit.
Again, it's all analog, but they are different skill sets.
(Other things have moved from fully analog to mixed signal: digitizing analog signals and then processing them digitally.)

Analog skills are still much needed.
Nowadays, highly integrated digital circuits communicate with each other at multi-gigabit speeds, and our board designs face issues that, 20 years ago, were a concern only for RF analog engineers.

But the analog skills needed are changing.
For example, when designing integrated circuits, it's increasingly necessary to be aware of the physical issues that arise at the sub-micron scale.
Being able to design an RF amplifier based on discrete HBTs does not necessarily prepare you to design one in a sub-micron integrated circuit.

about 4 months ago

Submissions

raxx7 hasn't submitted any stories.

Journals

raxx7 has no journal entries.
