Hardware registers can be defined as volatile bitfields so that a simple pointer can be used to access them.
Unless, of course, you have multiple registers with ordering constraints between them (e.g. write some data into one register, then toggle a flag in another). The volatile keyword in C gives no guarantee of ordering between accesses to different volatile objects, and the compiler is free to reorder the write to the flag before the write to the data.
That is what barriers are for: to guarantee the ordering. Or one uses accessor functions that contain them. I'm not saying that volatile is a cure-all, since it is not. It is a tool that needs to be understood. On many architectures one also needs to understand that a write to memory may not actually reach memory right away. For example, on MIPS a SYNC instruction is needed to flush the write buffer and guarantee that the data has been written to cache memory. On processors that are not cache coherent, a cache flush is also required for data structures in memory that hardware accesses via DMA, and a cache invalidate is needed after the hardware updates the memory, before the CPU reads it.
Things like interrupt handlers are fairly trivial to code in C.
As long as someone else is writing the assembly code that preserves state for the interrupted context, prevents the interrupt handler from being interrupted, and so on, and then calls into the C code. And with those constraints, pretty much any compiled language is equivalent.
In C the context that must be preserved is usually fairly small compared to many other languages.
One can do interesting things using the linker with C code that are not really possible with most other languages. For example, I can easily link my code to execute at a particular address and generate a raw binary without any ELF headers or other cruft, and there are further interesting things that can be done with linker scripts.
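For instance, a minimal linker script along these lines places the code at a fixed address (the address and section list here are hypothetical; a real one comes from the board's memory map):

```ld
ENTRY(_start)
SECTIONS
{
    . = 0x80000000;              /* hypothetical load address */
    .text   : { *(.text*) }     /* code, placed at the address above */
    .rodata : { *(.rodata*) }
    .data   : { *(.data*) }
    .bss    : { *(.bss*) *(COMMON) }
}
```

Linked with `ld -T boot.ld` and converted with `objcopy -O binary boot.elf boot.bin`, the output is nothing but the machine code and data: no headers, no cruft, suitable for burning straight to flash at the reset vector.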
That's pretty much true for any compiled language.
Not really. Many compiled languages need a lot of runtime support to handle things like memory management, which C does not.
There is no unexpected overhead due to the language. There is no background garbage collection that can run at some inopportune time. There's no extra code to do bounds or pointer checking to slow down the code or even get in the way.
The flip side of this is that you do the bounds checking yourself (and often get it wrong, leading to security vulnerabilities), and you end up passing it through a compiler that isn't designed to aggressively elide bounds checks that it can prove are not needed.
Languages that do bounds checking and other hand-holding generally don't work well on bare metal or in low-level environments (e.g. device drivers). If you're relying on the language to catch your bugs, then you have no business writing code in this sort of environment. Here a pointer often needs to be just a pointer, with no hand-holding and no compiler or runtime second-guessing the programmer. Compilers and languages do not understand hardware.
Generally it is pretty easy to move between different versions of the toolchain. C generally doesn't change much.
I can't tell from the Internet. Did you actually say that with a straight face? Even moving between different versions of the same C toolchain can cause C programs that depend on undefined or implementation-defined behaviour (i.e. basically all C programs, including yours, given that several of the things you listed in another post in this thread as really nice features of C are undefined behaviour, such as casting from a long to a pointer) to be optimised into something that doesn't do what the programmer intended.
Yes, I did, and one can, within reason, generally move between different versions of a compiler without issues. In C, a long is the size of a pointer in most ABIs, and I write code that deals with switching between 32-bit and 64-bit ABIs. If you're talking about 16-bit platforms or Microsoft, then things are often pretty screwed up, but generally speaking, on 32-bit and 64-bit platforms the size of a long follows the size of a pointer (except on Windows, where the LLP64 model makes a long 32 bits, but then who uses Windows for IoT?).
I have seen far fewer changes across toolchain versions with C than with other languages (e.g. C++). I have over 25 years of experience working in this type of environment, and my experience with changing toolchain versions in C has generally been pretty painless compared to my experience with, say, C++. On one C++ project we couldn't move the toolchain by even a minor revision because all hell broke loose, and there was no way to work around it.
I've been writing bare-metal software since the 1980s, before I was in high school. I've done Linux kernel porting, device drivers for a variety of operating systems, and many embedded systems. I've written PC BIOS code and device drivers for Linux, MS-DOS, OS/2, Solaris, VxWorks and U-Boot. I've ported the Linux kernel to an unsupported PowerQUICC processor and worked on numerous Linux device drivers. I also work with a number of custom environments and have written numerous bare-metal bootloaders from scratch, and by bootloader I'm not talking GRUB, since in these cases there is no BIOS and DRAM is not initialized: my code is the first thing that runs straight from the reset vector. I also work with a lot of embedded multi-core chips (up to 96 cores spread across two chips in a NUMA architecture). I've written drivers for everything from PCI, I2C, memory initialization (DDR2/DDR3/DDR4), USB, SATA and SPI to high-speed networking: ATM (Asynchronous Transfer Mode) at 25M/155M/622M/2.4G; Ethernet at 100Mbps, 1G, 2.5G, 5G, 10G, 25G and 40G; Wi-Fi; high-speed serial and other technologies; as well as FPGAs and networking PHY chips (try dealing with analog issues at 26GHz!). I've worked with x86, SPARC, MIPS, ARM, PowerPC, AVR (Arduino) and some other exotic architectures. I also deal with handlers for things like ECC errors in memory, cache issues, high-speed SerDes and a lot of other hardware issues. At any given time I have at least two hardware schematics open (the smallest being around 40 pages) as well as numerous hardware reference manuals, which are often thousands of pages long.
I was also responsible for the data path and a good chunk of the control plane of a complex router that ran multiple instances of Cisco IOS for the control plane; my code performed all packet manipulation for every sort of interface imaginable (including a lot of DSL), MPLS and many other protocols. So yes, I can say this with a straight face.