


EU, South Korea Collaborate On Superfast 5G Standards

YoopDaDum Re:How much more can we squeeze? (78 comments)

As others have said, the fundamental limit is given by Shannon. It defines the maximum throughput for a given spectrum bandwidth and signal-to-noise (S/N) ratio, and current technologies are already pretty close to it. It also indicates how total throughput can be increased, which can come from:

  • Adding channels. This is what MIMO spatial multiplexing (SM) is about;
  • Increasing the spectrum bandwidth used. There is a lot of spectrum at high frequencies, with new challenges, and one option for 5G is to use it;
  • Increasing the signal-to-noise ratio. This is what beam-forming is about.
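As a rough illustration, the Shannon-Hartley limit behind these three levers can be evaluated directly. A sketch only; the 20 MHz / 20 dB numbers are an invented example, not taken from anything above:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 20 MHz channel at 20 dB SNR, single stream
snr = 10 ** (20 / 10)  # 20 dB expressed as a linear power ratio (100x)
capacity = shannon_capacity_bps(20e6, snr)
print(f"{capacity / 1e6:.1f} Mbit/s")  # ~133.2 Mbit/s
```

Each lever maps onto one term of the formula: additional MIMO SM layers multiply C, more bandwidth raises B, and beam-forming raises S/N inside the logarithm.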

Having more MIMO SM layers (i.e. concurrent channels) is not practical. The complexity of an MMSE decoder is O(L^3), with L the number of layers, so it gets ugly quickly. Today MIMO SM is typically limited to 2 layers in practice, with 4 likely coming and 8 the practical limit (and even that may not be so practical really...).

Using very high frequencies (above 10 GHz) gives access to a lot of free spectrum, but the higher one goes, the lower the reach for a given power budget. To compensate for the high attenuation, this is coupled with massive multi-antenna arrays; the talk for 5G is 64 to 256 antennas. These are split between a few very costly MIMO SM layers and the rest for cheap beam-forming. So, for example, 256 antennas could behave like four 64-antenna beam-forming patches carrying 4 MIMO layers. Of course, with that many antennas and RF transceivers you have to compromise on cost and quality. So it's a lot of poor receive chains, versus a few very high quality ones today. But there's still the potential for an overall gain.
It has challenges though: it will still be for small cells (low reach) and rather low mobility (the beam steering cannot track high-speed mobiles, plus small cells don't work well for highly mobile devices: too many handovers). But because most people move slowly, and the places where capacity is most needed are urban centers where small cells are OK, it can still be a win.

But as one can see, high-speed 5G won't be universal like 4G is. By this I mean that 4G can (and will) completely replace 2G and 3G in time, while this high-frequency / massive beam-forming 5G can only complement 4G in high-density urban places, and will never be suitable for lower-density (rural) areas where 4G would stay.

And then there's the elephant in the room: a lot of the improvements in telecoms have been riding on Moore's law. With the scaling problems that are now starting to be discussed more openly, how much more processing power we can use for 5G, and what users are prepared to pay (in cost and power) for all these improvements, are interesting questions.

about 3 months ago

How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

YoopDaDum Re:what the FEC... (129 comments)

FEC is commonly used in streaming-over-IP applications to cope with lost packets; see for example the Raptor codes used for "above IP" FEC in LTE multicast (eMBMS) and DVB-H. In those applications the L2 transport has its own error-detection scheme, so IP packets will typically be either fully lost or OK (in other words, the probability of receiving a corrupt packet is very low).
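A minimal sketch of erasure-style FEC over whole packets, matching the lost-or-OK model above. This is a toy single-parity code, far simpler than Raptor (which recovers multiple losses); the packet contents are made up, and equal-length packets are assumed:

```python
def xor_parity(packets):
    """Byte-wise XOR of equal-length packets: the repair packet."""
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

source = [b"pkt0", b"pkt1", b"pkt2"]
parity = xor_parity(source)

# Packet 1 is lost in transit (the L2 CRC drops corrupt packets whole):
survivors = [source[0], source[2]]
recovered = xor_parity(survivors + [parity])
assert recovered == source[1]  # the lost packet is rebuilt from the rest
```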

about 4 months ago

Linux 3.15 Will Suspend & Resume Much Faster

YoopDaDum Re:not fast (117 comments)

The sloppiness in the PC ACPI world can bite Windows too. You can find nice ASRock mini PCs based on laptop hardware. If you look at a Tom's Hardware review of the ASRock VisionX 420, with a mobile Core CPU and mobile AMD GPU, you'll see that it consumes 28 W at idle. This is crazy high for what's in effect a mobile laptop in a small-form-factor box. One of the big reasons is that the system ACPI claims that PCIe ASPM (the low-power mode of PCIe) is not supported. Configuring laptop-mode on Linux and forcing ASPM results in idling at only ~12.5 W, and a quieter box. Enabling ASPM in power-saving mode alone saves ~8 W; the rest is due to other suboptimal Windows defaults, I guess.
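For reference, on Linux the ASPM policy can be inspected and overridden via sysfs; a sketch of the standard paths (kernel support varies by version, and `pcie_aspm=force` should be used with care, since firmware occasionally disables ASPM for a reason):

```shell
# Show the current ASPM policy (e.g. default, performance, powersave)
cat /sys/module/pcie_aspm/parameters/policy

# Switch to power saving at runtime (as root)
echo powersave > /sys/module/pcie_aspm/parameters/policy

# If the ACPI tables claim ASPM is unsupported, it can be forced at boot
# by adding this to the kernel command line: pcie_aspm=force
```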

So IMHO the "let's do the minimum and ship" approach PC vendors take to save a buck hurts both Windows and Linux. On the Windows side you'll usually get something considered good enough for its product class (here power was apparently not considered relevant for an HTPC) but likely not optimal. On Linux you get a mess by default because, as you say, vendors can't be bothered with it. But with some work you can actually get something quite good.

about 5 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum Re:The title is a bit misleading (32 comments)

Absolutely, and that's why I'm talking about "peak throughput" above and not just throughput. Increasing capacity allows either offering higher average throughput, or offering the same average throughput to more devices concurrently, or any mix in between. It does matter; in fact, changing fields a bit, capacity is what really matters in the cellular world. Stretching things a bit to make a point, you could say that in cellular, peak throughput is for marketing, while capacity improvements are for real-life concerns (increasing average throughput / reducing cost per bit). Unfortunately the latter is more complicated to sell, so it is often not put forward, but that's where the real work is.

Still, it's too bad that on a geek site one has to pass a capacity improvement as a peak improvement to make it more sexy. Let's learn to love capacity for its own sake ;)

about 5 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum Re:A rare sight (32 comments)

Thanks! The MAC-layer principles are the same in 802.11ac as in 802.11n. The MU-MIMO feature is really made for the AP-to-stations direction (downlink). The AP can decide on the combined transmission on its own, based on queued packets: if the AP has packets buffered for several stations that can be sent using MU-MIMO, it can merge them into a single PHY access. There's a gotcha for the ACK part: you don't want the receiving stations to ACK at the same time, so block ACK is used for all stations but one. Those other stations wait for a block ACK request from the AP, and the AP makes sure there is no collision.
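The AP-side decision described above can be sketched as a greedy grouping over per-station queues. A toy model only, not the 802.11ac scheduling algorithm; the station names and stream counts are invented:

```python
def pick_mu_group(queues, stream_demand, ap_streams):
    """Greedily bundle stations with buffered downlink traffic into one
    MU-MIMO transmission, within the AP's spatial-stream budget."""
    group, used = [], 0
    for sta, packets in queues.items():
        if packets and used + stream_demand[sta] <= ap_streams:
            group.append(sta)
            used += stream_demand[sta]
    return group  # one station ACKs right away; the rest use block ACK

queues = {"A": ["p1"], "B": ["p2", "p3"], "C": ["p4"]}
demand = {"A": 1, "B": 2, "C": 2}
print(pick_mu_group(queues, demand, ap_streams=4))  # ['A', 'B']
```

Station C is left for the next PHY access because its 2 streams would exceed the AP's budget of 4.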

For more details I can recommend "Next Generation Wireless LANs" by E. Perahia and R. Stacey. I have no relation to the authors, but I've used their book and found it good. Warning: it's only if you really want to go into the details (but it beats the IEEE spec handily on readability ;)

about 5 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum The title is a bit misleading (32 comments)

The title could lead some to believe that MU-MIMO increases the peak throughput, which is not the case. Spatial multiplexing (SM) MIMO allows as many independent concurrent streams as there are antennas on the receiver and transmitter (the minimum of both sides, actually). So with 4 antennas on the AP and 3 on the station, for example, you can have 3 streams. With SU-MIMO, all three streams are used between the AP and a single station. With MU-MIMO, the AP can share its streams among more stations: for example, 1 stream to station A and 2 streams to station B. There is a little bit of degradation compared to single use, of course. It's a win when you have, say, a 4-antenna AP and only 2-antenna stations: instead of leaving half the capacity on the floor, you can make use of all the streams. But it doesn't increase the peak rate possible with SU-MIMO; it increases the AP capacity when devices do not have as many antennas as the AP, which is the usual case.
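The stream-count arithmetic above is just a min(), which makes the capacity argument easy to check; a trivial sketch of the 4-antenna-AP / 2-antenna-station example:

```python
def su_streams(ap_antennas, sta_antennas):
    """Spatial-multiplexing streams between the AP and one station."""
    return min(ap_antennas, sta_antennas)

# SU-MIMO: a 4-antenna AP talking to one 2-antenna station idles half
# its streams; MU-MIMO lets it serve two such stations at once.
assert su_streams(4, 2) == 2
assert su_streams(4, 2) + su_streams(4, 2) == 4  # full AP capacity used
```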

about 5 months ago

Shuttleworth Wants To Get Rid of Proprietary Firmware

YoopDaDum Re:Some context from a hardware perspective. (147 comments)

True, but at least on a PC the main CPU can protect itself against any device by putting it behind an IOMMU. And in smartphones ARM also has an IOMMU IP (called SMU). I don't think it's commonly used yet, but in principle at least it is possible to hook any smart peripheral with an embedded processor (or several nowadays, for a 4G cellular modem for example) to the SMU, and then the SMU to the SoC NoC providing access to system memory. The application processor (AP) can then restrict any memory access from those devices.

So even if such a smart peripheral is exploited, if it sits behind an IOMMU/SMU the AP can restrict which parts of memory it can access, just like an MMU provides such protection between processes. The peripheral, even if inside the SoC die, can be sandboxed.
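The sandboxing idea can be modeled as a simple window check. A toy illustration only: the window addresses are invented, and real IOMMU/SMU hardware works on page tables rather than flat ranges:

```python
def iommu_allows(windows, addr, length):
    """Permit a peripheral's DMA access only if it lies entirely inside
    one of the regions the application processor mapped for that device."""
    return any(base <= addr and addr + length <= base + size
               for base, size in windows)

modem_windows = [(0x8000_0000, 0x10_0000)]  # one 1 MiB sandbox region

assert iommu_allows(modem_windows, 0x8000_0100, 512)      # inside: OK
assert not iommu_allows(modem_windows, 0x0010_0000, 512)  # outside: fault
```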

Is this how it is done today? I guess not. For SoCs, I'm not sure many chip vendors pay for the additional SMU IP unless they have another use case that requires it (like supporting virtualization). And even if the hardware is there, I'm not sure it is actually used for security containment as I described. Could a Linux expert comment on that, for the PC world at least?

Now of course an IOMMU/SMU will never protect against an embedded chip purposely designed to take control of the chip and access system memory without restriction, like the management chip in some Intel parts (the vPro feature, IIRC). But as you point out there are a lot of smart peripherals, and if we could at least mitigate exploits from them, that would be that many fewer potential holes.

Last point: security always has a cost. Someone needs to handle the IOMMU/SMU management to do the sandboxing. And a sandbox may prevent zero-copy, requiring extra copying by the AP in some cases (to move data between AP-reserved memory and a given device's sandbox).

about 6 months ago

Stop Trying To 'Innovate' Keyboards, You're Just Making Them Worse

YoopDaDum Re:Bad example. (459 comments)

But it has one very bad idea, which cannot be fixed: the mechanical function keys are gone. Instead you have touch-sensitive keys, with no travel. One can adapt to a new layout for a product one uses often and long (OK, after some grumbling...), and with some utilities remap keys if needed (switching Ctrl/Caps Lock is very common). But you can't add real keys where there are none, and you can't select an alternative keyboard with the usual function keys either.

It's very ironic that their new slogan is "ThinkPad, for those who do". I guess it's for those who don't do enough to remember each function key's role in the applications they work with, and need an icon on the key to remind them. And who use all this infrequently enough not to care about touch-sensitive keys.

I for one am very disappointed with this nonsense. I was looking forward to treating myself to a light laptop with a high-resolution screen and long battery life with this update. I guess Asus or somebody else can thank the "innovators" at Lenovo who pushed this crap through.

about 8 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:All the other OS, too. (352 comments)

Then criticize such designs for being insecure; I'm fine with that. But rightfully criticizing a weak design and condemning the whole of the cellular world are two different things. The article does the latter, and that's why I think there is exaggeration (with some errors and misunderstandings too).

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:All the other OS, too. (352 comments)

In theory it is possible. In practice the network operator would see the extra traffic (and so would the NSA, it seems ;), so the risk of getting caught red-handed seems way too high to me. It could kill a brand.

It's up to everyone to decide on their paranoia threshold and what they feel they need to protect against. But anyone who really cares about their own security and would trust 1) their smartphone, including a lot of closed-source software even with Android, 2) a complex baseband stack, and 3) a hugely complex operator network (or even several, for an international call) just doesn't get security. You want real security? You need a trusted terminal host stack and an end-to-end secure connection (SRTP, or VoIP over a VPN, etc.). Then you don't have to trust anything in between, as everything else just sees encrypted traffic. You still have to trust the terminal host stack, so it's best if one can audit it. This is overkill for most. But that's what secure phones are, with a price to match.

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:Over-the-air Security Protocols (352 comments)

Encryption is optional, but in practice it is essentially always enabled in production networks.

For the downgrade attack, I completely agree, and I mention this elsewhere. When a next-generation cellular technology is out, there are not many changes or updates to the previous ones, as operators focus their investments on the latest. And because of this we still have the 2G issue you mention. That's unfortunate, but in practice I believe the only solution will come when 2G is phased out. That's still a few years away in most countries, though it has already been done in some places (some big operators in Japan and South Korea).

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:Exploits for baseband processors (352 comments)

"Certification requirements" is the key thing here, and it's a lot of work for vendors (can't really be lazy and succeed in this space ;).

Spectrum is a shared medium, and the worst jammer is a buggy device. Because of this there are strict certification requirements before a device is legally allowed on the air. And going through all the associated tests costs a lot of money: it's a lot of time with expensive testing hardware, and in the field too (after passing the "safe for network" part). It's expensive for both operators and vendors, by the way. This cost makes everyone quite conservative: any change will go through a cost assessment. So it's not that they don't care about security; they do, and even if there have been holes in old versions, a lot of work goes into making the system secure. But not at any price.

To be practical, the target is good-enough security for the average person. And that's OK. If you really have higher protection requirements than that, there is no way around having your own controlled end-to-end scheme. I would expect anyone claiming security is critical, and taking it seriously, to have figured this out.

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:Over-the-air Security Protocols (352 comments)

Hi there. I'm not following 3G closely, but in LTE the encryption schemes are secure. You have two options, both 128-bit: SNOW 3G (inherited from 3G, as you can guess ;) and an AES-based scheme. Both are secure as of today. In R10 or R11 a Chinese scheme called ZUC was added too, also 128-bit. The operator decides which scheme is used, and a device must support both SNOW 3G and AES today.

The big thing is that the encryption is only between the device and the cell (base station). The assumption is that the cell is secure, and that behind it the operator network is secured by other means. So it's important to protect the cell (eNB in LTE) against compromise. A fake cell won't work, as in LTE the authentication is mutual: the UE won't attach to an unauthenticated cell, except for an emergency call.

For more details have a look at the 3GPP 33.401 spec, for example the latest R9 version.

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:Old silent SIM firmware (352 comments)

No. The SIM is powered from the baseband, and when the baseband is off the SIM has no power supply and can't do anything. Plus, the SIM can only communicate with cell towers through the baseband, never on its own. The SIM cannot wake up the baseband on its own; enabling the radio subsystem can only be done from the host processor. So what you described is not possible.

What is possible, however, is that when your device's cellular radio is on and the baseband is enabled, the SIM can use the baseband directly to communicate with the network, using what is called the SIM Toolkit (STK). This can be done with or without the user being informed. The STK also offers many features, like transforming the numbers you dialed (to seamlessly add a routing prefix, or redirect), filtering calls (block or accept), getting and reporting a location, etc. The specs are public; look for 3GPP TS 31.048 and ETSI 102.223 (they use the names USAT and CAT instead of STK, but it's all the same thing under different names).

about 10 months ago

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:All the other OS, too. (352 comments)

I believe the article has some gross exaggerations, and I'm in the baseband business. Of course I can't speak for all implementations, so this is my opinion only.

When the baseband is on a separate die, connected through an interface like SDIO (for QCOM), HSI, or USB HSIC, there is no way the baseband can control any host resources (unless it can exploit a bug in the host software, of course). When the baseband is on the same die as the application processor (AP) and its resources, it becomes at least theoretically possible for the BB to access AP resources. But think about it: why do we have process memory isolation and MMUs in the first place? And a kernel sitting between hardware and user space? For security and fault isolation. Do you really want to be the poor engineer having to debug a complex system-on-chip (SoC) where a bug in the BB part can create weird bugs in completely unrelated parts of the system handled by different teams? That looks like a recipe for disaster. In the systems I work on, you have hardware isolation between subsystems to prevent just this. And then a compromised BB can't do a lot of damage (same as for a separate-die BB).

I believe the article is a bit sensationalistic and misses the real danger: a compromised base station. That's what the quoted source articles talk about. If you can compromise a cell, you can spy on traffic without any attack on the UE (encryption is only between device and cell). A fake cell is an issue with 2G, but since then authentication is mutual: in LTE a device authenticates the cell too, and won't work with a fake one. But that doesn't protect against a compromised cell. This is a risk mostly with small and femto cells, as macro cells are easier to protect. The only interest I see in compromising the BB is to use it as a vector to attack the host processor (which has been done), where you have access to much more interesting stuff. That requires a security exploit on the host side too. On its own, the BB isn't really a very interesting attack target.

While I'm at it, there are other not-so-serious claims here. The fact that one can redirect calls to voice mail with an AT command has nothing to do with baseband security. A baseband supports a control interface, and usually even two: 1) a modern but proprietary interface, and 2) the standard but old-fashioned AT interface. You can do a lot with these commands; no need to compromise the BB. But normally such access is limited to trusted applications, so if anyone can access it, that's a host security issue, not a baseband issue.
The baseband doesn't contain one RTOS, but usually several instances. There's at least one RISC core (typically ARM), possibly more, and at least one DSP, possibly more, with likely more than one OS: having an instance running Linux is common, with the other(s) on an RTOS or even bare-bones schedulers (depending on the complexity of the task at hand and the timing constraints). This can vary a lot from one BB design to another, but as a rule of thumb, for a modern LTE-capable BB expect two RISC cores and two DSPs (YMMV).
The mutual authentication I've already talked about. Here the practical issue is that once the next generation is out, there's not much interest in doing big upgrades to previous generations. So the lack of network authentication in 2G will stay with us until 2G is phased out, which is still a few years away in most places (the big Japanese networks have already killed 2G, however).

about 10 months ago

Imagination Tech Announces MIPS-based 'Warrior P-Class' CPU Core

YoopDaDum Small correction on coherency (122 comments)

The P5600 core is being touted as supporting up to six cores in a cache-coherent link, most likely similar to ARM's CCI-400.

The CCI-400 is not relevant here. In both the MIPS and ARM worlds, CPUs are now multi-core capable out of the box. One cluster can typically be configured with 1 to 4 cores, and up to 6 for this latest MIPS core. L2 management is handled as part of the cluster, which also typically supports coherency with external hardware accessing the L2 through one or several coherency port(s). The L1 cache(s), the L2, and that hardware are kept coherent inside a cluster (with some limitations at times on the low end; there are variants). All this can be taken for granted at the high end, as here.

Now, what the CCI-400 does is different: it extends coherency management across several clusters. This is very important in the ARM world because of the big.LITTLE scheme: you want the big cluster and the little cluster to be kept coherent to speed up and ease the transition between the low-power and high-performance modes (it also helps when all cores are used at all times, as the OS can migrate tasks between cores more efficiently).

about a year ago

Broadcom Laying Off LTE and Modem Design Employees

YoopDaDum Re:Renesas Mobile (71 comments)

No, I believe they will keep the Renesas team, as they need them. The people they're laying off were acquired fairly recently (3 years ago, around October 2010). In short, historically Broadcom had its own 2G and 3G technology, with the associated engineers. When they looked at LTE, instead of developing the technology in-house they acquired a start-up called Beceem. Beceem was doing WiMAX successfully and, like everyone doing WiMAX, was in the process of switching to LTE and had started announcing they were on it.

Three years later, Broadcom still had no working LTE. Even Intel, which started announcing its first LTE chip in 2011, is only starting to show hardware 2 years later. It's likely Broadcom management is not happy at all about the situation, and in the end what's happening here is that they're sacking the Beceem team to replace it with the Renesas LTE team. As they make a point of saying they only acquired the LTE assets of Renesas, they may keep the historical Broadcom 2G/3G people. I'm only guessing from public info here, YMMV.

Now, I'm NOT laying the blame on the Beceem guys here. Integration into a bigger company can be complicated, with lots of turf wars. It's very possible that the in-house Broadcom 2G/3G team had its own plans to develop LTE (skunk-works style, maybe) and didn't see the Beceem acquisition in a good light. As LTE needs to be integrated with 2G/3G, you need good cooperation between the teams. It's also possible that some key Beceem people were unhappy and left, leaving the rest in trouble. They could have over-promised, too. What I mean is that there are many ways to fail in such an acquisition; I don't know what happened, and I can't and won't lay the blame on anyone. I'm just trying to clarify a bit what happened.

Now, it's telling that only the Renesas LTE assets are said to be of interest. Renesas had a full 2G/3G/LTE system as far as I know. Are they cutting out the 2G/3G parts and keeping their own to please the internal guys? How easy and fast will such surgery be? We're not at the end of the story...

about a year ago

Lenovo "Rips and Flips" the ThinkPad With New Convertible Helix Design

YoopDaDum Re:And yet they still... (143 comments)

Fortunately you can swap them in the BIOS and restore sanity.

about a year ago

big.LITTLE: ARM's Strategy For Efficient Computing

YoopDaDum The full Moore law (73 comments)

In the original 1965 paper, Moore was talking about the density of the least-cost process, and this part is often left out when summarizing what Moore's law is. The full Moore law is "the density of the least-cost process doubles every 18 months". This means that Moore's law can fail in two ways: if we can't get any denser (not yet), or if we can get denser, but not at a lower cost (now?). The following says that the "full Moore law" stops at 28 nm:

Rising defect densities have created a situation where — for the first time ever — 20nm wafers won't be cheaper than the 28nm processors they're supposed to replace.

The economic part is often left out of tech-site discussions, but it matters a lot. Up to now we had a sustainable situation where the cost of each new process increased regularly, but the new process eventually ended up cheaper per transistor. This brought everyone on board and also increased the reachable market, bringing in more revenues. That's why we have small micro-controllers everywhere nowadays.

Now, when the cost of new processes increases, only the part of the market that truly needs the improved density and performance will move on. And that's only a small part of the whole market. So we will have increasing costs with a shrinking addressable market: a double whammy. Expect end prices for high performance to rise quickly. That may slow things down significantly.
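A toy cost model makes the double whammy concrete. All numbers here are invented for illustration: density doubles per node, but the wafer cost more than doubles, so the "least-cost" part of Moore's law stalls even though the "density" part holds:

```python
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old_node = cost_per_transistor(wafer_cost=3000, transistors_per_wafer=1e9)
new_node = cost_per_transistor(wafer_cost=6500, transistors_per_wafer=2e9)

# Denser, yet no longer cheaper per transistor: the "full Moore law" fails.
assert new_node > old_node
```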

We'll see how it develops soon, but I would expect the economics to bite before we reach the tech limits.

about a year ago

ARMs Race: Licensing vs. Manufacturing Models In the Mobile Era

YoopDaDum Re:Shades of grey not black and white (54 comments)

It's "process" in a very general meaning here. There is a whole stack before getting to a physical embodiment on the silicon die from the design RTL. Small clients will just use the approved tools (from Cadence, Synopsys, Mentor...) with ready made libraries of cells/components (many providers, and ARM is one). The big guys have the man-power to define their own cells based on their own trade-offs to get higher performance (at the price of larger area / higher power) or lower power depending. They can also get some customization on the low-level tools. This enables to get more design margins when going to a new node very early to compensate for the fact that the technology is not mature yet for example. All of this is very labor intensive so is only for the big fabless players. You could summarize all this by saying that they have access to the very bottom layers of the stack. And as said elsewhere, all of this is a given for Intel with an even deeper integration between design and process.

about a year ago

