Coming Soon: An Open-Source, Reverse-Engineered Mali GPU Driver

timothy posted more than 2 years ago | from the mali-pitchers dept.

Graphics

An anonymous reader writes "Next month at FOSDEM there will be an announcement of a fully open-source and reverse-engineered ARM Mali graphics driver for Android / Linux. This driver, according to Phoronix, is said to support OpenGL ES and other functionality from reverse engineering the official ARM Linux driver. Will this mark a change for open-source graphics drivers on ARM, just as the Radeon did for x86 Linux?"

47 comments

First post... (-1)

Anonymous Coward | more than 2 years ago | (#38773232)

...whoohoo!

It's a big challenge to reverse engineer (1, Redundant)

Taco Cowboy (5327) | more than 2 years ago | (#38773258)

When vendors decide to keep everything to themselves and leave users out in the cold, the only way in is reverse engineering.

But it is hard.

It is especially hard when one has to deal with updated microcode for each subsequent release of the product.

Kudos to those who try to crack the Radeon!

Re:It's a big challenge to reverse engineer (5, Insightful)

TheRaven64 (641858) | more than 2 years ago | (#38773430)

Add to that, most modern GPUs also have a variety of coprocessors for things like H.264 decoding. These are quite often licensed as IP cores from a third party, so a company like nVidia or AMD may not even legally be allowed to provide you with their programming interfaces. To make life even more fun for reverse engineers, they don't document anywhere which third parties these coprocessor cores were licensed from, so it's generally very hard to work out whom to contact with a request for documentation. This is why open-source drivers tend to lack some of the features of the proprietary ones: once you've reverse engineered the GPU, there's still a load of other stuff left...

Re:It's a big challenge to reverse engineer (1, Interesting)

walshy007 (906710) | more than 2 years ago | (#38773484)

This is not a problem in Linux land, and it is precisely why everything is moving over to Gallium3D and friends.

When it comes down to it, all programmable GPUs provide similar functionality (shader units and so on), and Gallium3D serves as the abstraction layer that all the higher-level stuff sits on top of.

In the end, all the driver needs is the low-level knowledge of the hardware to provide a state tracker the abstraction layer understands; the rest is already done (or in the process of being done). Code reuse: it is useful.
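To make the layering concrete, here is a minimal sketch in C. The names (gpu_ops, the mali_* stubs) are invented for illustration and are not the actual Gallium3D API; the real interfaces live in Mesa's src/gallium tree under names like pipe_screen and pipe_context.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical driver-facing interface: the shared stack (shader
     * compiler, OpenGL state tracking, resource management) calls down
     * through a small table of hardware-specific hooks. */
    struct gpu_ops {
        void (*upload_shader)(const uint32_t *native_code, size_t words);
        void (*set_state)(int state_id, uint64_t value);
        void (*draw)(unsigned first_vertex, unsigned vertex_count);
    };

    /* The only per-GPU part: stubs standing in for the register-level
     * programming that reverse engineering has to recover. */
    static void mali_upload_shader(const uint32_t *code, size_t words) { (void)code; (void)words; }
    static void mali_set_state(int state_id, uint64_t value) { (void)state_id; (void)value; }
    static void mali_draw(unsigned first, unsigned count) { (void)first; (void)count; }

    static const struct gpu_ops mali_ops = {
        .upload_shader = mali_upload_shader,
        .set_state     = mali_set_state,
        .draw          = mali_draw,
    };

Everything above the table is shared across GPUs; only the table itself has to be filled in per chip.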

Re:It's a big challenge to reverse engineer (5, Interesting)

Kjella (173770) | more than 2 years ago | (#38773772)

Add to that, most modern GPUs also have a variety of coprocessors for things like H.264 decoding. These are quite often licensed as IP cores from a third party

Not coprocessors but third-party ASICs, particularly for multimedia en/decoding and HDMI audio. That it may be third-party IP is only half the problem, though; as I understand it, AMD's Unified Video Decoder (UVD) is their own design, yet it is still a paperweight under the open-source drivers because of DRM issues. Same with HDMI audio: they are required to provide a Protected Audio Path in order to play Blu-rays and such. Releasing shader information is also a risk, because part of the H.264 decoding process happens in shaders, at least on AMD, which is part of the reason the open-source driver doesn't get to share more with the proprietary driver.

That said, a great deal could be done without more information, too. As I understand it, all the shader information needed to implement OpenGL 4.2 and OpenCL for Evergreen and Northern Islands (AMD HD 5xxx and 6xxx series) is out there, but both Mesa and the drivers need huge amounts of work. The current version of Mesa only supports OpenGL 2.1, Mesa 8 is supposed to bring OpenGL 3.0 support, and OpenCL is still at the proof-of-concept stage. That would "only" take a hundred or so full-time developers, not more specs. Right now the few developers there are have their hands full just keeping up with the huge architecture changes between generations...

Re:It's a big challenge to reverse engineer (1)

Anonymous Coward | more than 2 years ago | (#38774022)

Add to that, most modern GPUs also have a variety of coprocessors for things like H.264 decoding...

This is not true at all for ARM systems; in fact, quite the opposite. The Mali-400 is a 3D graphics core *only*. It renders images to a memory buffer and is not even capable of presenting that image on a display. That is the job of the display controller, which on your average SoC would not be designed by ARM. A PC GPU normally has dedicated memory, a 3D core, a 2D blitter (on older hardware), a video block and a display controller, all integrated on a graphics card. In an ARM SoC, all those parts could come from different vendors and be integrated on the same chip along with, e.g., the CPU cores.

This is also why news about mainline Linux DRM modules for ARM SoCs rarely has anything to do with 3D graphics. It typically means that an SoC vendor like TI, Samsung or ST-Ericsson has implemented a KMS driver for their display hardware; it has nothing to do with the third-party 3D core they are using.

Re:It's a big challenge to reverse engineer (1)

TeknoHog (164938) | more than 2 years ago | (#38774068)

most modern GPUs also have a variety of coprocessors for things like H.264 decoding.

Why is that? They should have plenty of general-purpose silicon for doing it in software. (Mobile devices would be an exception because dedicated hardware is more power-efficient.) In fact, why don't we just write ffmpeg etc. in OpenCL and get this over with?
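The building blocks for that do exist today. Below is a minimal, self-contained sketch in C of one tiny decode-style step on the GPU, adding a residual to a predicted block with saturation; the kernel and all names are illustrative, not taken from ffmpeg, and error checking is omitted for brevity.

    #include <stdio.h>
    #include <CL/cl.h>

    /* out[i] = clamp(pred[i] + residual[i], 0, 255): the kind of per-pixel
     * work (motion compensation, IDCT, deblocking) a decoder is built from. */
    static const char *src =
        "__kernel void add_residual(__global const uchar *pred,\n"
        "                           __global const short *residual,\n"
        "                           __global uchar *out) {\n"
        "    size_t i = get_global_id(0);\n"
        "    out[i] = convert_uchar_sat((short)pred[i] + residual[i]);\n"
        "}\n";

    int main(void) {
        enum { N = 16 };
        cl_uchar pred[N], out[N];
        cl_short res[N];
        for (int i = 0; i < N; i++) { pred[i] = 200; res[i] = (cl_short)(i * 10); }

        cl_platform_id plat; cl_device_id dev; cl_int err;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

        /* The driver compiles the kernel for whatever GPU is present. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "add_residual", &err);

        cl_mem dp = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof pred, pred, &err);
        cl_mem dr = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof res, res, &err);
        cl_mem dq = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, &err);
        clSetKernelArg(k, 0, sizeof dp, &dp);
        clSetKernelArg(k, 1, sizeof dr, &dr);
        clSetKernelArg(k, 2, sizeof dq, &dq);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dq, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);
        for (int i = 0; i < N; i++) printf("%d ", out[i]); /* saturates at 255 */
        printf("\n");
        return 0;
    }

The catch is exactly the power point above: a fixed-function decoder does this work at a fraction of the energy the shader cores would burn.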

Re:It's a big challenge to reverse engineer (1)

Narishma (822073) | more than 2 years ago | (#38774726)

Having it in dedicated silicon is more power efficient than doing it on the general purpose shaders. It may also be faster and/or produce better quality depending on whether the workload is suited to the shaders' architecture or not.

Re:It's a big challenge to reverse engineer (0)

Anonymous Coward | more than 2 years ago | (#38777433)

Modded "Redundant" ??

Re:It's a big challenge to reverse engineer (0)

Anonymous Coward | more than 2 years ago | (#38787615)

Modded "Redundant" ??

You found it insightful? Or informative? Or funny? ...it's pretty much stating the obvious.

propreitary (1, Insightful)

it0 (567968) | more than 2 years ago | (#38773238)

I know they keep the drivers proprietary to keep their special 3D chip tricks to themselves, but can't you just feed it tables of vectors and be done with it? Why do you need such low-level access that it apparently exposes all their company secrets?

Before you all say "performance!", my question would be: really?

Re:propreitary (-1)

Anonymous Coward | more than 2 years ago | (#38773244)

Your question is: why should we go open-source?
Really?

Have you learned nothing?

This has to be a troll.

Re:propreitary (-1)

Anonymous Coward | more than 2 years ago | (#38773272)

AC idiot completely missed the point.

Re:propreitary (5, Informative)

airlied (135423) | more than 2 years ago | (#38773288)

You haven't looked at 3D programming in 15 years, then?

It's just like a CPU: you build shader programs, and optimising shader programs requires compilers, and writing good compilers is hard and costs lots of money.
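To make that concrete: the compiler lives inside the driver and runs every time an application hands it shader source. A minimal sketch in C (illustrative only; it assumes a current OpenGL ES 2.0 context, with the EGL setup omitted for brevity):

    #include <stdio.h>
    #include <GLES2/gl2.h>

    /* A trivial fragment shader. glCompileShader() invokes the compiler
     * buried in the GPU driver, which translates this into the chip's
     * (usually undocumented) native instruction set. */
    static const char *frag_src =
        "precision mediump float;\n"
        "uniform vec4 tint;\n"
        "void main() { gl_FragColor = tint; }\n";

    GLuint compile_fragment_shader(void) {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &frag_src, NULL);
        glCompileShader(sh); /* the driver's compiler runs here */

        GLint ok = GL_FALSE;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[512];
            glGetShaderInfoLog(sh, sizeof log, NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return sh;
    }

How well that hidden compile step allocates registers and schedules instructions is the hard, expensive part.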

The other reason is patent infringement: they are all infringing on everyone, just like the rest of the mobile space, so they don't want to make it that easy to see what they are doing.

Neither of these reasons is really valid, but lawyers and crap engineers like to keep themselves looking good.

Re:propreitary (1)

TheRaven64 (641858) | more than 2 years ago | (#38773442)

writing good compilers is hard and costs lots of money

And GPU companies, by and large, suck at it. Qualcomm has recently been hiring as many people as they can with compiler experience, but neither AMD nor nVidia has a particularly impressive compiler team. Intel does... but they don't work on their GPU stuff.

Re:propreitary (0)

hitmark (640295) | more than 2 years ago | (#38773696)

Not that I would trust Intel, or the rest for that matter, to create a generic compiler.

It seems the Intel compiler ends up defaulting code back to a 386 path if the CPU does not provide the right "id".
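What is being described is runtime CPU dispatch. Here is a minimal sketch of the technique in C, using GCC's __builtin_cpu_supports; the function names are invented for the example. The complaint against Intel's compiler was that its dispatch reportedly keyed on the CPUID vendor string rather than on feature flags like these.

    #include <stdio.h>

    /* Baseline path: works on any x86. */
    static int sum_scalar(const int *v, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += v[i];
        return s;
    }

    /* Stand-in for a vectorised path; a real one would use SSE/AVX intrinsics. */
    static int sum_simd(const int *v, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += v[i];
        return s;
    }

    int main(void) {
        int v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        /* Dispatch on what the CPU reports it can do, not on who made it. */
        int (*sum)(const int *, int) =
            __builtin_cpu_supports("sse2") ? sum_simd : sum_scalar;
        printf("%d\n", sum(v, 8));
        return 0;
    }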

Re:propreitary (1)

CurryCamel (2265886) | about 2 years ago | (#38776781)

Qualcomm just submitted their Hexagon backend to LLVM, btw.

Re:propreitary (1)

TheRaven64 (641858) | more than 2 years ago | (#38780575)

Ah, nice. I saw they were hiring people for that about 6-8 months ago and followed some of the progress, but I hadn't seen that the code had actually made it in. It's the first DSP back end to be contributed to LLVM, so it will probably be quite useful to other DSP makers as a reference. It will also be interesting to see whether TI follows with a C64x back end; currently the only compilers for that DSP family (present in most of the OMAP series) are proprietary.

Re:propreitary (1)

CurryCamel (2265886) | more than 2 years ago | (#38783961)

Not the first DSP, but possibly the first vendor-backed one. LLVM has (or at least had until recently) a Blackfin backend, and quick googling shows at least two off-tree C64x backend projects for LLVM.

But I agree: having a commercially backed DSP target is good for the compiler framework. It makes it easier to add possible Mali support in the future.

Re:propreitary (2)

Yvanhoe (564877) | more than 2 years ago | (#38773312)

It is about the ability to fix bugs and to improve and maintain the software, even when the company decides Linux is no longer a target (as Nvidia did with its Optimus technology). Also, authorizing a binary blob to execute on your machine with no ability to check what happens inside it is a security problem: there could be backdoors in these drivers.

Re:propreitary (4, Insightful)

TheRaven64 (641858) | more than 2 years ago | (#38773350)

Increasingly, GPUs are just general-purpose processors that are optimised for a very different set of algorithms to CPUs (i.e. stream-based access to memory instead of lots of locality of reference, primarily floating-point vector data instead of integer data, and few branches instead of about one every 7 instructions on average for CPUs). This means that a GPU driver is increasingly just a compiler. There is a lot less of a reason to keep the details of the hardware instruction set secret, because, as with something like ARM or x86, the valuable bit is how it's implemented, not the instruction set itself. This also means that there's a lot of incentive to keep the in-house drivers secret, because the difference between a bad compiler and a good one can easily be a factor of two in terms of performance with real code and sometimes a lot more.

Re:propreitary (0)

Anonymous Coward | more than 2 years ago | (#38773504)

The reason for keeping the instruction set secret is that you sure as hell don't want to get stuck with it.
As a secondary point, you just as much don't want to be stuck with having to document it.
Both get even more weight given that you don't really want people to know about all the undefined behaviour and the sometimes ridiculous bugs your compiler has to work around.
And that is amplified by the fact that you also don't want your competitors to write a demo that does something ridiculous just to hit one of your bug workarounds, giving horrible performance only on your GPU.

Re:propreitary (0)

Anonymous Coward | more than 2 years ago | (#38773564)

Why would a GPU manufacturer be "stuck" with anything? Developers do not write software to target any given GPU directly; they use abstractions such as OpenGL/DirectX and OpenCL. From a developer's perspective, the underlying instruction set or architecture of the GPU is irrelevant. There's a long history of GPU manufacturers chucking everything out and starting again with totally different architectures.

Re:propreitary (0)

Anonymous Coward | more than 2 years ago | (#38773620)

People have been writing PTX code for nVidia (for CUDA); I don't know whether anyone was crazy enough to take it one step further and write real native assembler.
So there is a real risk that the easier you make it, the more people will skip the abstractions (admittedly only really in the GPU-compute area).
Also, the driver (at least the compiler part) certainly does depend on the assembler code; with it open source, there is a higher risk that development will go in a direction that makes things difficult for your next generation.
I doubt it's a good enough argument on its own, but together with other "risks" and general skepticism towards open-sourcing drivers, it's not surprising few want to go that way.

Re:propreitary (2)

TheRaven64 (641858) | more than 2 years ago | (#38773692)

PTX is already a high-level abstraction, but if you use it directly then you are basically stuck with your code only working on nVidia GPUs (or, on a subset of recent nVidia GPUs, depending on which PTX features you use). If you write assembly directly for any given GPU, then you lose portability. This means that 99% of developers won't, because no one will buy a game that requires a specific make and model of GPU (unless it happens to be a model used in a console). For some categories of user, however, that doesn't matter. If you are running a simulation on a 1,000-node cluster of nVidia GPUs and tweaking the assembly can make it 10% faster, then that's the equivalent of buying 100 more nodes for your cluster: a huge saving. The cost in terms of portability is pretty low, because you're probably going to rewrite the code before you get a different cluster anyway.

Re:propreitary (1)

TheRaven64 (641858) | more than 2 years ago | (#38773760)

The reason for keeping the instruction set secret is that you sure as hell don't want to get stuck with it.

Why would you be stuck with it? You're stuck with x86 because it was the target for compiled code decades ago. No one is going to distribute binary code for the latest nVidia or AMD GPUs; at most they will distribute some kind of IR like PTX or Gallium IR. Each generation of AMD and nVidia GPU has a different instruction set, and distributing code in binary form for every GPU would be insane (especially since even GPUs that share an instruction set often have different performance characteristics, so JIT or install/launch-time compiled code will be faster on any random target machine).

As a secondary point, you just as much don't want to be stuck with having to document it.

You have to document it for your compiler team anyway...

Both get even more weight given that you don't really want people to know about all the undefined behaviour and the sometimes ridiculous bugs your compiler has to work around.

Why not? CPUs have errata, as do most devices. When was the last time you avoided a CPU or a NIC because of the documented errata that drivers / microcode need to work around?

And that is amplified by the fact that you also don't want your competitors to write a demo that does something ridiculous just to hit one of your bug workarounds, giving horrible performance only on your GPU.

Security through obscurity doesn't work. It's easy to profile binary code on any GPU, find something it sucks at, and write a contrived demo that performs badly on it. The same is true for CPUs. This is why no one trusts benchmark suites written by a single vendor. You may remember that in the late '90s a certain vendor tried this, and their reputation suffered; reviewers really don't like companies that cheat on benchmarks.

Re:propreitary [sic] (0)

Anonymous Coward | more than 2 years ago | (#38774832)

I've seen a fair number of embedded platforms (routers, phones, consoles, etc.) where there are alternative firmwares available, but they all run some old kernel version because that's the only one some driver blob supports. Most commonly it's for the wi-fi chip or the GPU.

Intel, AMD and nVidia need the competition (1)

G3ckoG33k (647276) | more than 2 years ago | (#38773298)

Intel, AMD and nVidia need competition from the outside.

Intel's complacency in the CPU space is especially worrisome.

The GPU hardware arena is still outpacing everyone's expectations, so this open-source initiative is really, really good news.

Kudos!!!

Re:Intel, AMD and nVidia need the competition (0)

Anonymous Coward | more than 2 years ago | (#38773954)

Do you read the summary, ever?

Re:Intel, AMD and nVidia need the competition (0)

Anonymous Coward | more than 2 years ago | (#38774124)

Did you even bother reading the title? So you are rooting for competition while bad-mouthing the one company that has made one of the biggest efforts to provide source code to the community? And who do you think it benefits that it's the other one whose driver had to be reverse engineered, a driver whose performance you have absolutely no idea about? Gee, sometimes I really think some accounts on Slashdot are supported by marketing companies.

The way I saw it (1)

G3ckoG33k (647276) | about 2 years ago | (#38777047)

The way I saw it:

Intel is a tremendous contributor to the open-source world!

Due to the lacklustre performance of AMD's CPUs, Intel holds back on its releases; think of the two(!) inactive cores in the latest series. Not because they think AMD are the good guys, but because Intel's CPUs are so much better that they could readily have killed AMD off using the bad tactics other companies employ. But Intel are intelligent people, unlike many others in the industry, and they keep the competition at a safe distance in the CPU arena.

In the GPU corner it is a different matter. nVidia has had some recent success in the Top500 charts and of course holds a big chunk of the top end of the GPU market, as does AMD. The mid and low segments are very different: there Intel has gained a very large share, thanks to the ever-improving integrated graphics in its recent CPUs, and AMD is making even more impressive strides with its latest offerings. Unlike Intel or AMD, nVidia has no genuine share of the mass-market mid- or low-segment CPU market.

ARM? ARM is a newcomer to the consumer market in the home-PC sense, gaining traction from below, having originally been the main force in the mass production of low-end appliance CPUs. Recent progress in the mobile space, thanks in part to the hardware abstraction of Android, has won it attention from Microsoft and others, and now from the server market as well. So they are after a new market which was traditionally closed to them. ARM is very new to the graphics market, and I wasn't really aware these processors had such capability beyond the mobile space.

So, in light of that, it was good to see some open-source efforts on that hardware too!

If I understand the situation correctly, Intel, AMD and nVidia all retain their closed, proprietary binaries in parallel with any open-source effort. How would ARM be different in that respect? Or did I misunderstand something? Maybe I did, but please enlighten me beyond referring to "the summary" or "the title", which do not contradict what I understood. Or do they? I just don't see it.

Re:The way I saw it (-1)

Anonymous Coward | more than 2 years ago | (#38779763)

Intel are a bunch of lazy asses... witness the Celeron, and the multicore laptop chips without speed governors so the fan runs full bore the whole time... way to go, guys... not :o/

This is good news (3, Informative)

DarkOx (621550) | more than 2 years ago | (#38773322)

Thanks to this effort we are much closer to being able to run a traditional GNU/X.org userland on these devices if desired. Just work out the details of the radio hardware, and it should soon be possible to roll your own mobile distro without having to be a hardware expert.

Re:This is good news (0)

Anonymous Coward | more than 2 years ago | (#38774754)

Yes, because GNU/X is so superior when it comes to the touchscreens we all love and adore. Also, go read up on Wayland and why it's being developed.

Re:This is good news (1)

Korin43 (881732) | more than 2 years ago | (#38775763)

Also go read up on Wayland and why it's being developed.

Because in the open source community, if you discover a problem, you can fix it?

Re:This is good news (0)

Anonymous Coward | more than 2 years ago | (#38776033)

My experience of the open-source community with regard to graphics drivers is that they really cannot fix it.

(r200: XiG's driver is so much better it's not even funny.)

Even for Evergreen, where the specs were released, the drivers are crap and incomplete.

Wayland is not needed (if XiG can do it, someone who knows enough about X should be able to).

I don't like the blatant disregard of the Linux community for other users of the codebase, either.

Re:This is good news (2)

serviscope_minor (664417) | about 2 years ago | (#38776635)

Also go read up on Wayland and why it's being developed.

To bring the future of GUIs (such as remote display, etc) to Linux?

Re:This is good news (1)

Korin43 (881732) | more than 2 years ago | (#38782735)

Also go read up on Wayland and why it's being developed.

To bring the future of GUIs (such as remote display, etc) to Linux?

I hope you're trolling. [wikipedia.org]

Re:This is good news (1)

Vegemeister (1259976) | more than 2 years ago | (#38779613)

Until there's a reliable implementation of 'ssh --wayland', Wayland is a non-starter.

They better have power saving support (1)

Anonymous Coward | more than 2 years ago | (#38773556)

If this thing ever wants to be more than a curiosity for hobbyists with a religious opposition to closed-source drivers, these drivers had better be competitive at getting the GPU into power-saving mode. In some ways that's an even bigger deal than performance: if the open-source drivers kill your battery even faster than it already dies on most smartphones, they won't be too popular.

An easy way to fix the problem (-1)

Anonymous Coward | more than 2 years ago | (#38773950)

You're welcome. [letmebingthatforyou.com]

Re:An easy way to fix the problem (0)

Anonymous Coward | more than 2 years ago | (#38774070)

Why would they want to install the world's most virus and malware infested OS?

Already partially open source? (2, Informative)

Anonymous Coward | more than 2 years ago | (#38774222)

As far as I can tell, ARM already provides GPLv2 implementations for the 2D parts of the driver, so what's new here is the 3D stack.

http://www.malideveloper.com/developer-resources/drivers/open-source-mali-gpus-linux-exadri2-and-x11-display-drivers.php [malideveloper.com]

What a waste of time... (0, Troll)

Anonymous Coward | more than 2 years ago | (#38774706)

If you've ever dealt with the nVidia or AMD driver development teams, you know they have a lot of programmers working their asses off to keep the stuff from crashing. There is no way an open-source effort will have the resources to produce a stable, high-performance driver for even remotely recent hardware. Good luck, but they don't stand a chance.

Re:What a waste of time... (1)

expatriot (903070) | more than 2 years ago | (#38775588)

The Mali-400 will take a huge effort to reverse engineer, and what hope is there for the next generation, with OpenCL and DirectX?

I did not know the MALI driver was closed source (-1)

Anonymous Coward | more than 2 years ago | (#38775166)

All I can say now is: fuck ARM and fuck ARM graphics. If you can't supply an open driver, then I'm not interested. I've seen the mess that closed video drivers made of the desktop.
