
In Paris, Terrorists Kill 2 More, Take At Least 7 Hostages

YoopDaDum Re:Why didn't they take them alive? (490 comments)

The 4 hostages were killed by the terrorist earlier, when he entered the supermarket, not during the assault. At both sites the terrorists had booby-trapped the place with several sticks of explosives already wired to a detonator. That seems a pretty good answer to "why didn't they try to take them alive?": killing them was the fastest and safest way to neutralize them in this situation.

about three weeks ago

How the Rollout of 5G Will Change Everything

YoopDaDum Re:Rollout in 2030 (216 comments)

I can't tell either ;) The principle is an extension of an existing trend: in dense areas, you need tight synchronization among cells to reduce interference and improve system capacity and quality of service. Existing LTE evolutions, already specified but not yet deployed in the field, include things like joint scheduling and cooperative beamforming among cells. The pCell idea pushes this to the extreme: it's a fully integrated cooperative system, where each node is a set of antennas and a device is not associated with one node but handled cooperatively by several.

Now there is a limit to this approach: in practice you need a centralized scheduler ("Cloud RAN" in marketing speak) and very low latency between it and the nodes, which restricts practical deployments to dense areas. So it's not a universal solution, but it has potential where congestion actually occurs. Another thing: although there's a lot of work and momentum behind this idea, it's still rather young, and it seems not so easy to make even the less radical LTE variants work as well as planned in practice. As often, the devil is in the details. But I can't comment much more there: telecoms is big and I'm not involved in this area.

about 2 months ago

How the Rollout of 5G Will Change Everything

YoopDaDum Re:Latency (216 comments)

Yes, it's marketing spin again. There are plans to reduce latency compared to LTE (which was already a big improvement). The 1 ms looks more like a target for the RAN (radio access network) part of the network only. But even today with LTE, the RAN is not the main contributor to end-to-end latency: the core network takes a bigger share of the budget even within the wireless telco part alone, and the Internet part must then be added on top.
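As a rough illustration (the per-segment numbers below are assumptions for the sake of the example, not measurements), the end-to-end budget might break down like this, showing why a 1 ms RAN target alone doesn't give 1 ms round trips:

```python
# Illustrative end-to-end latency budget; all figures are assumed,
# not measured. The 1 ms 5G target only covers the first segment.
budget_ms = {
    "RAN (radio access)": 1.0,
    "core network": 10.0,       # operator core, assumed
    "Internet transit": 20.0,   # to a typical server, assumed
}

total = sum(budget_ms.values())
for segment, ms in budget_ms.items():
    print(f"{segment:>20}: {ms:5.1f} ms ({100 * ms / total:4.1f}%)")
print(f"{'total':>20}: {total:5.1f} ms")
```

Even halving the RAN latency barely moves the total with numbers like these.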

about 2 months ago

How the Rollout of 5G Will Change Everything

YoopDaDum Re:Rollout in 2030 (216 comments)

According to the original ITU position, to be truly called 4G one needs to be able to do at least 1 Gbps peak rate downlink. To comply with this requirement, LTE release 10 added a new category, Cat 8, actually approaching 3 Gbps. On paper: nobody has implemented it yet --- and it'll be a while before anyone does (if ever: it takes 8x8 MIMO and 5 aggregated 20 MHz LTE channels to reach 3 Gbps).
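A back-of-the-envelope check of that Cat 8 figure, assuming the common rule of thumb of roughly 75 Mbps per spatial layer per 20 MHz LTE carrier (an approximation, ignoring overhead differences):

```python
# Rough LTE peak-rate arithmetic; 75 Mbps/layer/carrier is an
# assumed rule of thumb, not an exact standard figure.
mbps_per_layer_per_carrier = 75
layers = 8      # 8x8 MIMO spatial multiplexing
carriers = 5    # 5 aggregated 20 MHz channels

peak_mbps = mbps_per_layer_per_carrier * layers * carriers
print(f"~{peak_mbps} Mbps, i.e. about {peak_mbps / 1000:.0f} Gbps")
```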

I'm a telecom professional, and I'm tired of all those "true 4G" statements and of the debate over what is or isn't 4G. I find the ITU 4G definition ridiculous: a long time ago the world of telecom manufacturers was made of cautious engineering companies. Then very aggressive new entrants came and made outrageous claims [1], and the older companies went along with the charade so as not to be seen as laggards. That's basically how we got this very bad joke of "official 4G is 1 Gbps". Anyone looking around should notice the slight disconnect with reality there. As a bonus joke, new categories were added later on: Cat9 peaks at 300 Mbps, go figure...

For what it's worth, in my opinion the true difference that warrants a new generation number is the move to OFDMA. 1G was analog, 2G was narrowband digital, 3G is wideband CDMA, 4G is wideband OFDMA. This makes sense to me as a telecom engineer. As for the ITU BS, I'd rather forget all about it; it's just too embarrassing.

As for 5G there are interesting things ongoing, but it's very early in the game. For now it's mostly people wanting attention to get funding (like TFA) or cheap PR. Please don't feed the PR spinners. The high-frequency spectrum approach, with many very small antennas and cheap RF (cheap to compensate for the numbers, 64-256...), is interesting, but there is a long road to practical products.

[1] There is a joke on this, and let's protect the culprit: how do you tell the difference between an Ericsson engineer and a Xxx one? The E/// engineer couldn't tell a lie if you put a gun to his head. The Xxxx engineer couldn't tell the truth. I work for neither E/// nor Xxxx BTW.

about 2 months ago

Intel Claims Chip Suppliers Will Flock To Its Mobile Tech

YoopDaDum Re:Cost per wafer? (91 comments)

Intel are a process node ahead of the competition

This needs to be qualified. It is true for high-performance chips, but for mobile and tablets we're talking about SoCs. As far as I know Intel is still not shipping any 14nm SoC, and such parts are quite a way out in 2015. Intel mobile chips are still at 22nm today, whereas TSMC is in volume production at 20nm (Apple's latest APs, Qualcomm's latest LTE modems). Intel may leapfrog TSMC in 2015, but the gap for low-cost mobile SoCs is not as big as people often think. Intel is the king of high performance; for low-cost, good-enough mobile SoCs: not so much.

about 3 months ago

Study: There's a Wi-Fi Hotspot For Every 150 People In the World

YoopDaDum Re:WiFi in France (63 comments)

I can't speak for all ISPs as I don't know their offers, but I can speak for Free, which I use and which is very popular. After all, they came up with the first box and were also the first to include WiFi.

you have no control over that hotspot

Wrong. With Free you can turn the hotspot off completely, turn it on but keep it private (for your own personal use only), or turn it on and share it.
If the hotspot is off or private, you can't use other Free APs for your own use. If you share, you get free access to any Free hotspot. Up to you to decide.

the company uses your payed line to make more money

As explained above, it goes both ways. I get an extra service for free too, which is why my WiFi is shared. And other users are always handled at a lower priority, so it's really transparent to me: they just get the unused capacity on my line.
The WiFi access is not directly monetized, in that no one pays for it. But it certainly makes Free more attractive as an operator: their boxes are very popular, so you get WiFi coverage mostly everywhere in a city as long as you participate in the sharing.

you have no control to your router whatsoever

You have control over some basic configuration: the DHCP settings, basic port forwarding and IPv6 enabling, for example. But it's true that the router belongs to Free and you don't have direct access like you would on an OpenWRT box. If you want that, you can put your own router behind it; it's a bit wasteful, but I did it at one point to get more control over DNS, for example.

you have to login to the company's website and see what limited options they provide you

Yes, all the configuration is done through Free's web interface and pushed to the box from their network.

in France I had terrible problems with latencies and ofc with Youtube

There was quite a big fight between Free and Google a few months back, the same kind of conflict we see everywhere between ISPs and big traffic sources like Google and Netflix, with ISPs trying to get money from them. It seems to have been resolved recently for Youtube; it was really awful at the worst of the clash but is now OK (I'm not a big user though). Still, I'm waiting to see how it'll go with Netflix now that they're present in France, and whether that kind of problem will happen again. I'm not super optimistic, but I've heard discussions are happening. As I understand it, the ISPs would prefer to offer Netflix on their boxes with a cut of the profits in exchange for good network quality, and the big discussion is on the percentage... We'll see.

about 3 months ago

Gigabit Cellular Networks Could Happen, With 24GHz Spectrum

YoopDaDum Re:too much multi pathing at that frequency (52 comments)

This is made for small cells indeed. Any time you see 5G with gigabit speed, it's only for high density areas covered with small cells. 4G will still be there for larger cells (coverage layer in high density areas, and less dense areas).

Also, there are ways to fight multipath. The primary one used in this scheme is beamforming with a pretty dense array --- the pre-5G tests I've seen typically use between 64 and 256 antennas. You then get a very narrow beam, and a primary path well above the secondary reflections. Of course, for this to work the base station must precisely track the device, and that's another rarely mentioned limitation of such high-speed 5G schemes: they're mostly for static / pedestrian usage. You can extend this to vehicular use (some demos use cars) by adding some smarts to anticipate movement (constrained to roads, etc.) and using a less narrow beam (but then you lose SNR and reduce spectral efficiency). Mostly, though, I would expect such 5G to serve the majority of slow-moving devices, with fast-moving ones pushed to an overlay of bigger 4G cells (so fewer handovers too). This is consistent with small cells. And besides beamforming, I'd guess that like 4G the waveform will use OFDM/OFDMA, which also helps fight multipath.
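To give a feel for how narrow those beams get, here's a sketch using standard textbook approximations (assumed here: array gain ~ 10*log10(N) dB, and a half-power beamwidth of roughly 102/N degrees for a half-wavelength-spaced uniform linear array):

```python
import math

# Standard rough approximations for an N-element array; these are
# ballpark figures for illustration, not measured beam patterns.
for n in (64, 128, 256):
    gain_db = 10 * math.log10(n)       # array gain over one antenna
    beamwidth_deg = 102 / n            # half-power beamwidth, ULA approx.
    print(f"{n:3d} antennas: ~{gain_db:.1f} dB gain, ~{beamwidth_deg:.1f} deg beam")
```

A fraction-of-a-degree beam at 256 antennas is why precise tracking of the device becomes the hard part.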

Last comment: as usual the marketing focuses on high bandwidth, but the true benefit is capacity. In real life, instead of serving a few users at super high throughput, such a 5G network will be able to support more users at lower (but still very high) average throughput by multiplexing them.

about 3 months ago

Internet Companies Want Wireless Net Neutrality Too

YoopDaDum Re: Wireless bandwidth is limited (38 comments)

Don't you want to prioritize voice over bulk data so that VoLTE calls still work when the cell is at full load during peak time? That is what is done, and it's not only QoS: at the radio level, a voice connection is handled differently. To optimize for voice, even voice over IP, LTE uses different techniques (SPS, ROHC, TTI bundling...) that add complexity to the system.

And although overload doesn't always happen, it's something that has to be dealt with. We don't yet have the technology to offer unlimited capacity at peak time in dense areas at a price people are ready to pay. So somewhere, sometimes, cells will be overloaded, and we need to think about how to deal with that.

Not discriminating between traffic like voice and downloads on a congested cell doesn't make sense to me as a user (I don't work for an operator). Net neutrality for wireless would need to be more subtle than "every packet is equal": something like different traffic classes, possibly with different pricing, but with access to those classes open to all service providers under the same conditions. Today QoS is reserved for the operator's own services; an OTT player can only get special QoS through a deal with the operator. While opening this up would be nice, there is significant complexity in access (a standard API for an application to request a QoS level), admission control (better to reject a call than to provide uselessly crippled voice, for example; this is done today for voice, but it would likely need to be more powerful) and billing (which needs to be open, secure and simple).

It's already complex to make VoLTE work well; I don't see this happening anytime soon, just based on the underlying complexity. A reasoned discussion addressing the real challenges of wireless would be a good start to prepare the ground, but I don't think we're even close to that.

about 3 months ago

Samsung Achieves Outdoor 5G Mobile Broadband Speed of 7.5Gbps

YoopDaDum Re:As a good Slashdotter I didn't RTFA (36 comments)

LTE is not done and is continuing to evolve. To give a rough idea, recent products implement LTE "release 10" (R10), and standards work will start in a few months on R13, which will not cover anything 5G yet. R13 should arrive in the field around 2017. This Samsung demo is not a standard; it's a technology evaluation / advanced development that will only land in real products in a few years.

It's likely that a real production 5G will come from within 3GPP, the organization that standardizes 2G/3G/4G. At every big transition some people try to go it alone with a completely different standard (for 4G: Qualcomm UMB, WiMAX) and 5G may be no different, but such an attempt would be very unlikely to succeed IMHO. The technology demonstrated here is not universal: it can only work in very dense areas. Which is fine; that's also where we need added capacity. But it means that whereas LTE can in time fully replace 2G and 3G, 5G will be designed to coexist with 4G and will never replace it. At best, you'll have LTE in low-density areas and 5G in dense areas. And even in dense areas there may be a 4G coverage umbrella to provide service continuity.

There's a lot of hype and BS in wireless, so take all throughput / generation targets with a big grain of salt... LTE Advanced defines a "category 8" that goes up to 3 Gbps, for example, but it's there only as a joke to get the IMT 4G stamp. The initial LTE already defined a category 5 that no product ever implemented; it was just there to match the WiMAX 2 peak rate target. It was bollocks and impractical, and nobody cared once WiMAX 2 died. Similarly, the people at IMT got over-excited, stuck in a hype loop, and defined real 4G as the ability to support 1 Gbps. It was nonsense at the time and is still beyond what's practical. So what did LTE-A do? It introduced realistic new categories 6 and 7 with 300 Mbps down, and a BS category 8 at 3 Gbps. So on paper LTE-A is 4G, thanks to a category 8 that nobody will implement anytime soon, if ever. I've seen pedants say LTE is not real 4G but LTE-A is, because only LTE-A meets the 1 Gbps IMT target: what a joke!

The high rates of 5G as demoed by Samsung use a very different approach: much higher frequencies, allowing larger channels and data rates. The size of the antennas also shrinks with frequency, so it becomes possible to put many small antennas in a device. Each receive path is quite poor compared to LTE, to keep the cost down, but that is compensated by having a lot of them. These many antennas are not used for massive spatial multiplexing (SM) MIMO, which would be too computationally expensive, but for a few SM layers as today plus beamforming, since beamforming is cheap. It's a bit early to say whether it will work well in real life, but it looks promising and worth pursuing.
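The "antennas shrink with frequency" point is just the wavelength formula lambda = c / f, with elements typically spaced half a wavelength apart. A quick sketch (the specific frequencies below are my own illustrative picks, not figures from the demo):

```python
# Half-wavelength antenna spacing at a few illustrative frequencies.
# lambda = c / f; the chosen frequencies are examples only.
c = 3e8  # speed of light, m/s

for f_ghz in (2.0, 28.0, 60.0):
    lam_mm = c / (f_ghz * 1e9) * 1000
    print(f"{f_ghz:5.1f} GHz: lambda = {lam_mm:6.1f} mm, "
          f"lambda/2 spacing = {lam_mm / 2:5.1f} mm")
```

At a few millimeters per element, fitting dozens of antennas into a handset-sized device stops being absurd.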

about 3 months ago

Google Announces Motorola-Made Nexus 6 and HTC-Made Nexus 9

YoopDaDum Re:HTC (201 comments)

They were sloppy with updates a few years back, at their peak of popularity. I guess they had become complacent. With their recent challenges it seems they have understood the importance of keeping customers happy, and hence loyal long term, instead of pressuring them to upgrade faster by dropping support for old models and succeeding only in annoying people. It's the long term versus a hypothetical short-term gain. I'm glad they've taken the long view in the end.

Now I find they take updates of "old" phones seriously, at least for their flagships (can't tell for the others). I have the first One (M7); it's about 18 months old now and I have Android 4.4.3 with Sense 6 on it, just like the latest One M8. I've had a handful of updates since I bought the phone, tracking Google's new versions closely. HTC announced Lollipop will be supported on the M7 too. Typically all the big updates are made available within 3 months or so of a Google release. If they keep it like this (nice phones with updates), I'll probably stick with them when replacement time comes.

One important point: I have an unlocked phone, bought without a contract and independently from my operator. Updates may not be as fast when the phone is bought on contract through an operator, but then the issue would be the operator, not HTC.

about 3 months ago

LTE Upgrade Will Let Phones Connect To Nearby Devices Without Towers

YoopDaDum Re:They've reinvented CB radio! (153 comments)

Yes it will. One of the drivers for D2D is public safety, where the network may not be available. Think of the situation after a big earthquake or hurricane, where cell towers have been damaged and cellular coverage is patchy or gone entirely in some areas. Then D2D can be used locally by public safety people to communicate with anyone who has an LTE device supporting D2D (and the vision is that in time, everybody will). D2D will support both this offline / local mode and a network-assisted mode when you're under cellular coverage. There's also a hybrid case where one party is under coverage and the other is not.

There are other use cases for D2D, but I must say I find most of the "end user" ones gimmicky. Besides public safety, which can potentially be really useful IMHO, there are other interesting use cases for M2M / infrastructure, like car-to-car communications (assisted driving in the future) and coverage extension (mesh-like, although the issue there is always the impact on the relaying devices: would you like a meter in the basement draining your smartphone battery? You would need user acceptance, and then it gets complicated).

about 4 months ago

EU, South Korea Collaborate On Superfast 5G Standards

YoopDaDum Re:How much more can we squeeze? (78 comments)

As others have said, the fundamental limit is given by Shannon. It defines a maximum throughput for a given spectrum bandwidth and signal-to-noise ratio, and current technologies are pretty close to it. It also indicates the ways total throughput can still be increased:

  • Adding channels. This is what MIMO spatial multiplexing (SM) is about;
  • Increasing the used spectrum bandwidth. There is a lot of spectrum at high frequencies, with new challenges, and one option for 5G is to use it;
  • Increasing the signal-to-noise ratio. This is what beamforming is about.
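The three levers can be sketched directly from Shannon's formula, C = B * log2(1 + SNR), with independent SM layers multiplying the total (the bandwidth and SNR figures below are arbitrary illustrative assumptions):

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db, layers=1):
    """Shannon limit C = layers * B * log2(1 + SNR), in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return layers * bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Baseline: one 20 MHz channel at 20 dB SNR (assumed figures).
base = shannon_capacity_mbps(20e6, 20)
print(f"baseline:          {base:7.1f} Mbps")
# Lever 1: add channels (2-layer MIMO spatial multiplexing).
print(f"2 MIMO layers:     {shannon_capacity_mbps(20e6, 20, layers=2):7.1f} Mbps")
# Lever 2: more spectrum (5x the bandwidth).
print(f"100 MHz bandwidth: {shannon_capacity_mbps(100e6, 20):7.1f} Mbps")
# Lever 3: better SNR, e.g. via beamforming (+10 dB).
print(f"30 dB SNR:         {shannon_capacity_mbps(20e6, 30):7.1f} Mbps")
```

Note how the SNR lever only pays off logarithmically, while layers and bandwidth scale linearly; that's why the spectrum and MIMO levers get so much attention.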

Having more MIMO SM layers (i.e. concurrent channels) is not practical. The complexity of an MMSE decoder is O(L^3) with L the number of layers, so it gets ugly quickly. Today MIMO SM is typically limited to 2 layers in practice, with 4 likely coming and 8 the practical limit (and even that may not be so practical...).
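Taking the O(L^3) scaling at face value (ignoring constant factors, which is a simplification), the relative decoding cost grows quickly:

```python
# Relative MMSE decoding cost under the assumed O(L^3) scaling,
# normalized to the 2-layer case; constant factors ignored.
baseline_layers = 2
for layers in (2, 4, 8):
    relative_cost = layers**3 // baseline_layers**3
    print(f"{layers} layers: ~{relative_cost}x the cost of 2 layers")
```

Going from 2 to 8 layers is roughly a 64x jump in decoder cost, which is why 8 is called the practical limit.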

Using very high frequencies (above 10 GHz) gives access to a lot of free spectrum, but the higher one goes, the lower the reach for a given power budget. To compensate for the high attenuation, this is coupled with massive multi-antenna arrays; the talk for 5G is 64 to 256 antennas. These are split between a few very costly MIMO SM layers and the rest for cheap beamforming: 256 antennas could behave, for example, like four 64-antenna beamforming patches carrying 4 MIMO layers. Of course, with that many antennas and RF transceivers you have to compromise on cost and quality. So it's a lot of poor receive chains, versus a few very high-quality ones today. But there's still the potential for an overall gain.
It has challenges though: it will still be for small cells (low reach) and rather low mobility (the beam steering cannot track high-speed mobiles, plus small cells don't work well for highly mobile devices: too many handovers). But because most people move slowly and the places where capacity is most needed are urban centers where small cells are OK, it can still be a win.

But as one can see, high-speed 5G won't be universal the way 4G is. By this I mean that 4G can (and will) completely replace 2G and 3G in time, while this high-frequency / massive-beamforming 5G could only complement 4G in high-density urban places, and would never be suitable for lower-density (rural) areas, where 4G would stay.

And then there's the elephant in the room: a lot of the improvements in telecoms have been riding on Moore's law. With the scaling problems now starting to be discussed more openly, how much more processing power we can use for 5G, and what users are prepared to pay (in cost and power) for all these improvements, are interesting questions.

about 7 months ago

How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds

YoopDaDum Re:what the FEC... (129 comments)

FEC is commonly used in streaming-over-IP applications to cope with lost packets; see for example the "above IP" FEC used in LTE multicast (eMBMS) or DVB-H, such as Raptor codes. In those applications the L2 transport has its own error detection scheme, so IP packets will typically be either fully lost or OK (in other words, the probability of receiving a corrupt packet is very low).
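A toy illustration of the principle (far simpler than Raptor, and the packet contents are made up): one XOR parity packet over a block lets the receiver rebuild any single fully-lost packet, which works precisely because L2 guarantees packets arrive either intact or not at all.

```python
# Toy packet-level erasure FEC: XOR parity over equal-length packets.
# Much simpler than Raptor codes; for illustration only.
def xor_packets(packets):
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

data = [b"pkt0", b"pkt1", b"pkt2"]   # made-up payloads
parity = xor_packets(data)           # sent alongside the data packets

# Simulate losing packet 1; XOR the survivors with the parity to rebuild it.
received = [data[0], None, data[2]]
recovered = xor_packets([p for p in received if p is not None] + [parity])
assert recovered == data[1]
print("recovered:", recovered)
```

Real codes like Raptor generalize this to recover many losses from a stream of repair packets, but the erasure (not corruption) channel model is the same.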

about 8 months ago

Linux 3.15 Will Suspend & Resume Much Faster

YoopDaDum Re:not fast (117 comments)

The sloppiness of the PC ACPI world can bite Windows too. You can find nice Asrock mini PCs based on laptop hardware. If you look at Tom's Hardware's review of the Asrock VisionX 420, with a mobile Core CPU and a mobile AMD GPU, you'll see that it consumes 28W at idle. This is crazy high for what is in effect a mobile laptop in a small-form-factor box. One of the big reasons is that the system ACPI claims that PCIe ASPM (the low-power mode of PCIe) is not supported. Configuring laptop-mode on Linux and forcing ASPM results in idling at ~12.5W, and a quieter box. Enabling ASPM in power-saving mode alone saves ~8W; the rest is due to other suboptimal Windows defaults, I guess.

So IMHO the "do the minimum and ship" approach PC vendors take to save a buck hurts Windows and Linux both. On the Windows side you'll usually get something considered good enough for its product class (here, power apparently wasn't considered relevant for an HTPC) but likely not optimal. On Linux you get a mess by default because, as you say, vendors can't be bothered with it. But with some work you can actually get something quite good.

about 10 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum Re:The title is a bit misleading (32 comments)

Absolutely, and that's why I'm talking about "peak throughput" above and not just throughput. Increasing capacity allows either offering a higher average throughput, or offering the same average throughput to more devices concurrently, or any mix in between. It does matter. Actually, changing fields a bit, capacity is what really matters in the wireless cellular world: stretching things a bit to make a point, you could say that in cellular, peak throughput is for marketing while capacity improvements are for real-life concerns (increasing average throughput / reducing cost per bit). Unfortunately the latter is more complicated to sell, so it's often not put forward, but that's where the real work is.

Still, it's too bad that on a geek site one has to pass a capacity improvement as a peak improvement to make it more sexy. Let's learn to love capacity for its own sake ;)

about 10 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum Re:A rare sight (32 comments)

Thanks! The MAC layer principles are the same in 802.11ac as in 802.11n. The MU-MIMO feature is really made for the AP-to-stations direction (downlink). The AP can decide on the combined transmission on its own, based on queued packets: if the AP has packets buffered for several stations that can be sent using MU-MIMO, it can merge them into a single PHY access. There's a gotcha on the ACK side: you don't want the receiving stations to ACK at the same time, so block ACK is used for all stations but one. Those other stations wait for a block ACK request from the AP, and the AP makes sure there is no collision.

For more details I can recommend "Next Generation Wireless LANs" by E. Perahia and R. Stacey. I have no relation to the authors, but I've used their book and found it good. Warning: it's only if you really want to go into the details (but it beats the IEEE spec handily on readability ;)

about 10 months ago

New MU-MIMO Standard Could Allow For Gigabit WiFi Throughput

YoopDaDum The title is a bit misleading (32 comments)

The title could lead some to believe that MU-MIMO increases the peak throughput, which is not the case. Spatial multiplexing (SM) MIMO allows as many independent concurrent streams as there are antennas on the receiver and transmitter (the min of both sides, actually). So with 4 antennas on the AP and 3 on the station, for example, you can have 3 streams. With SU-MIMO, all three streams are used between the AP and a single station. With MU-MIMO, the AP can spread its streams over more stations: for example, 1 stream to station A and 2 streams to station B. There is a little degradation, of course, compared to single-user operation. It's a win when you have, say, a 4-antenna AP and only 2-antenna stations: instead of leaving half the capacity on the floor, you can make use of all the streams. But it doesn't increase the peak rate possible with SU-MIMO; it increases the AP capacity when devices don't have as many antennas as the AP, which is the usual case.
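The stream accounting above can be sketched as follows (a deliberate simplification: it ignores the MU-MIMO efficiency loss and real scheduler behavior):

```python
# Sketch of SU vs MU stream accounting; simplified, no scheduler details.
def su_streams(ap_antennas, station_antennas):
    # Single-user: streams limited by the weaker side.
    return min(ap_antennas, station_antennas)

def mu_streams(ap_antennas, station_antenna_list):
    # Multi-user: the AP spreads its streams over several stations.
    used = 0
    for ant in station_antenna_list:
        used += min(ant, ap_antennas - used)
        if used == ap_antennas:
            break
    return used

# 4-antenna AP with 2-antenna stations: SU leaves half the streams idle.
print("SU-MIMO:", su_streams(4, 2), "streams")       # one station served
print("MU-MIMO:", mu_streams(4, [2, 2]), "streams")  # two stations served
```

Same peak per station either way; the win is that the AP's 4 streams are no longer capped by a single 2-antenna client.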

about 10 months ago

Shuttleworth Wants To Get Rid of Proprietary Firmware

YoopDaDum Re:Some context from a hardware perspective. (147 comments)

True, but at least on a PC the main CPU can protect itself against any device by using an IOMMU. And for smartphones, ARM also has an IOMMU IP (called the SMMU). I don't think it's commonly used yet, but in principle at least it is possible to hook any smart peripheral with an embedded processor (or several nowadays, for a 4G cellular modem for example) to the SMMU, and the SMMU to the SoC NoC providing access to system memory. The application processor (AP) can then restrict any memory access from those devices.

So even if such a smart peripheral is exploited, if it sits behind an IOMMU/SMMU the AP can restrict which parts of memory it can access, just as an MMU provides such protection between processes. The peripheral, even if inside the SoC die, can be sandboxed.

Is this how it is done today? I guess not. For SoCs, I'm not sure many chip vendors pay for the additional SMMU IP unless they have another use case that requires it (like supporting virtualization). And even if the hardware is there, I'm not sure it is actually used for security containment as I described. Could a Linux expert comment on this, for the PC world at least?

Now of course an IOMMU/SMMU will never protect against an embedded chip purposely designed to take control of the system and access memory without restriction, like the management chip in some Intel parts (the vPro feature, IIRC). But as you point out there are a lot of smart peripherals, and if we could at least mitigate exploits from them, that would be that many fewer potential holes.

Last point: security always has a cost. Someone needs to handle the IOMMU/SMMU management to do the sandboxing, and a sandbox may prevent zero-copy, requiring extra copying by the AP in some cases (to move data from AP-reserved memory to/from a given device's sandbox).

about 10 months ago

Stop Trying To 'Innovate' Keyboards, You're Just Making Them Worse

YoopDaDum Re:Bad example. (459 comments)

But it has one very bad idea, which cannot be fixed: the mechanical function keys are gone. Instead you have touch-sensitive keys with no travel. One can adapt to a new layout on a product one uses often and long (OK, after some grumbling...), and remap keys with utilities if needed (switching Ctrl and Caps Lock is very common). But you can't add real keys where there are none, and you can't choose an alternative keyboard with the usual function keys either.

It's very ironic that their new slogan is "Thinkpad, for those who do". I guess it's for those who don't do enough to remember each function key's role in the applications they work with, and need an icon on the key to remind them. And who use all this infrequently enough not to care about touch-sensitive keys.

For one, I'm very disappointed with this nonsense. I was looking forward to treating myself to a light laptop with a high-resolution screen and long battery life with this update. I guess Asus or somebody else can thank the "innovators" at Lenovo who pushed this crap through.

1 year, 12 days

The Second Operating System Hiding In Every Mobile Phone

YoopDaDum Re:All the other OS, too. (352 comments)

Then criticize such designs for being insecure; I'm fine with that. But rightfully criticizing a weak design and condemning the whole of the cellular world are two different things. The article does the latter, and that's why I think there is exaggeration (plus some errors and misunderstandings).

about a year ago


