
Tegra 4 Likely To Include Kepler DNA

Soulskill posted more than 2 years ago | from the when-two-processors-love-each-other-very-much dept.

Graphics 57

MrSeb writes "Late last week, Jen-Hsun Huang sent a letter to Nvidia employees congratulating them on successfully launching the highly acclaimed GeForce GTX 680. After discussing how Nvidia changed its entire approach to GPU design to create the new GK104, Jen-Hsun writes: 'Today is just the beginning of Kepler. Because of its super energy-efficient architecture, we will extend GPUs into datacenters, to super thin notebooks, to superphones.' (Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc., presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.) This has touched off quite a bit of speculation concerning Nvidia's Tegra 4, codenamed Wayne, including assertions that Nvidia's next-gen SoC will use a Kepler-derived graphics core. That's probably true, but the implications are considerably wider than a simple boost to the chip's graphics performance." Nvidia's CEO is also predicting this summer will see the rise of $200 Android tablets.


Paper tiger (4, Insightful)

Anonymous Coward | more than 2 years ago | (#39527843)

So will this version be something more than a paper tiger? So far, the Tegras have sounded better on paper than their real-world performance ends up being.

Re:Paper tiger (1)

Anonymous Coward | more than 2 years ago | (#39528087)

I don't know, the Tegra2 was kind of mediocre, but the Tegra3 in my Transformer Prime is everything I'd hoped for and everything it claimed to be.

Re:Paper tiger (-1, Troll)

sonicmerlin (1505111) | more than 2 years ago | (#39529833)

Doesn't even come close to the iPad 3's quad-channel DDR quad-core GPU.

Re:Paper tiger (-1)

Anonymous Coward | more than 2 years ago | (#39530681)

Warning :

Kool Aid alert! Oh yeah!

Re:Paper tiger (1, Informative)

Anonymous Coward | more than 2 years ago | (#39531081)

The Tegra 3 has a quad core CPU while the A5X only has a dual core CPU.

Re:Paper tiger (-1)

Anonymous Coward | more than 2 years ago | (#39532397)

The Tegra 3 came out more than 3 months earlier. Also, why the fuck is this comment modded up? Are there Apple shills in the house today?

Re:Paper tiger (2)

Lunix Nutcase (1092239) | more than 2 years ago | (#39533575)

Yes, which means it should be far more advanced than the A5X. The A5X is the same A5 from March of last year, doing nothing but bumping the core count in the GPU and adding some memory. The GPU in it is just the 3-year-old SGX543, which is also underclocked in comparison to something like the PS Vita, which uses the same GPU core at a higher clock speed. So the Tegra should have had no issues beating the A5X in every test, hands down.

Re:Paper tiger (0)

Anonymous Coward | more than 2 years ago | (#39533099)

I bet you were wanking over a picture of Jobs while you wrote that

Re:Paper tiger (0)

Anonymous Coward | more than 2 years ago | (#39535905)

When iPad can run Android and cost under $400 then maybe your comment will be relevant.

Re:Paper tiger (1)

TwoBit (515585) | more than 2 years ago | (#39538833)

According to this week's Anandtech article, the Transformer Prime's Tegra 3 CPU outperforms the iPad 3's. It's the GPU that falls short of the iPad 3's, mostly due to having fewer GPU transistors rather than any architectural weakness.

Wow (3, Funny)

Hatta (162192) | more than 2 years ago | (#39527859)

Where did they get Johannes Kepler's DNA?

Re:Wow (3, Funny)

Anonymous Coward | more than 2 years ago | (#39527901)

From the outside of Tycho Brahe's fake nose.

Don't ask.

Re:Wow (0)

Anonymous Coward | more than 2 years ago | (#39527955)

I thought this was announcing that nVIDIA's Kepler processor uses DNA computing [wikipedia.org] , and that Tegra 4 will too!

Re:Wow (0)

Anonymous Coward | more than 2 years ago | (#39528007)

The story does not specify which Kepler's DNA they used. David Kepler of Maine seems a viable source of 'Kepler' DNA.

Re:Wow (2)

bacon.frankfurter (2584789) | more than 2 years ago | (#39528091)

They cloned it from Kepler's blood taken from a mosquito fossilized in amber, obviously. DNA on a chip. Makes perfect sense. Kepler's laws of planetary motion probably add a significant boost to pipeline performance. And what better way to integrate that functionality than by cloning Kepler himself, and regrowing his brain on a chip! I was wondering how long it would take to grow a brain on a chip, after they successfully created a gut on a chip [slashdot.org].

Re:Wow (1)

Anonymous Coward | more than 2 years ago | (#39528849)

They cloned it from Kepler's blood taken from a mosquito fossilized in amber, obviously. DNA on a chip. Makes perfect sense. Kepler's laws of planetary motion probably add a significant boost to pipeline performance. And what better way to integrate that functionality than by cloning Kepler himself, and regrowing his brain on a chip!

You know a lot about him. You must have read his orbituary.

Re:Wow (1)

Anonymous Coward | more than 2 years ago | (#39528155)

His mom was a witch, so I suspect some devilry was involved...

Re:Wow (2)

Ironchew (1069966) | more than 2 years ago | (#39528279)

Nvidia's next-gen SoC will use a Kepler-derived graphics core.

When did Johannes Kepler solve a graphics core?

Re:Wow (0)

Anonymous Coward | more than 2 years ago | (#39528419)

It was received with a radio telescope, with kind instructions on how to combine it with our DNA...

Re:Wow (1)

steelfood (895457) | more than 2 years ago | (#39529057)

If it's a graphics processor they were looking to make, they probably should've gone with Michelangelo or Leonardo's DNA.

Re:Wow (0)

Anonymous Coward | more than 2 years ago | (#39529093)

They pried it from his cold, dead hands.

Didn't we already see the $200 tablets? (0)

Anonymous Coward | more than 2 years ago | (#39527871)

And didn't they completely suck if their name wasn't Kindle Fire?

Re:Didn't we already see the $200 tablets? (1)

yurtinus (1590157) | more than 2 years ago | (#39528745)

Yeah, that bit had me confused too...

Codenamed Wayne? (0)

Anonymous Coward | more than 2 years ago | (#39527963)

They're calling these "super"foo, and the Tegra 3 (Kal-El) fits fine...

But Batman isn't superpowered, he's just a normal human like you or I might be if we had suffered severe psychological trauma as a child, and had a resulting obsession with vigilante justice. Oh, and were incredibly wealthy.

Re:Codenamed Wayne? (0)

Anonymous Coward | more than 2 years ago | (#39528647)

No, but Batman becomes awesome because of the technology he has. Unlike Kal-El, who is naturally powerful, Wayne is engineered for awesomeness. See what they're getting at?

Personally I am waiting for... (0)

Anonymous Coward | more than 2 years ago | (#39528005)

...when the Ultra [insert gadget] age comes about.

Superman has nothing on Ultraman.

And now for the obvious question... (0)

Anonymous Coward | more than 2 years ago | (#39528115)

...which should have been pre-empted in the summary:

WTF is Kepler?

Re:And now for the obvious question... (1)

wagnerrp (1305589) | more than 2 years ago | (#39530355)

Since you can't even be bothered to read the first sentence in the summary, it is the code name for their new GPU architecture, starting with the GTX680.

Re:And now for the obvious question... (3, Informative)

Anonymous Coward | more than 2 years ago | (#39530377)

WTF is Kepler?

A new gaming-oriented GPU from NVidia that can't compete with even the previous generation of GPUs (Fermi) on many compute applications, particularly integer workloads. It's fine if you do single-precision floating-point stuff all the time, but terrible if you want to work with integers or double-precision floats.

Changing marketing terms (3, Funny)

T.E.D. (34228) | more than 2 years ago | (#39528153)

Nvidia calls Tegra-powered products 'super,' as in super phones, super tablets, etc, presumably because it believes you'll be more inclined to buy one if you associate it with a red-booted man in blue spandex.

Wayne-powered products will of course be called "bat" instead.

Re:Changing marketing terms (0)

Anonymous Coward | more than 2 years ago | (#39528443)

Or you can get the Oliver knock-offs with the Arrow-theme.

Re:Changing marketing terms (0)

Anonymous Coward | more than 2 years ago | (#39528955)

So what will they call Wayne-powered car systems?


Still Waiting (1)

jmDev (2607337) | more than 2 years ago | (#39528271)

...on quantum GPU and CPU to go commercial.

nVidia's CEO is a little behind the times. (2)

bistromath007 (1253428) | more than 2 years ago | (#39528283)

The droidpad I'm posting this from cost $200.

Re:nVidia's CEO is a little behind the times. (0)

Anonymous Coward | more than 2 years ago | (#39529739)

The droidpad I'm posting this from cost $200.

All right then... what model is it, and how happy are you with it?

I have a Nook Tablet, and I'm very happy with the hardware, happy with it as a book reader, but I wish it was a real Android tablet (with Bluetooth and Android Market^H^H^H^H^H^H^H^H^H^H^H^H^H^HGoogle Play).

Still bitter about "Android pod touch" delay (1)

tepples (727027) | more than 2 years ago | (#39530435)

Likewise, I wish my Archos 43 Internet Tablet had come with the application formerly known as Android Market. But for three years after the iPod touch gained its app store, after Microsoft had long abandoned the Pocket PC, Google had some sort of aversion to making a PDA; early Android compatibility definition documents had working cellular data capability as an absolute requirement. Apple had a monopoly on PDAs that come with access to the platform's flagship app store until Google finally relented sometime around October 2011 and allowed Samsung to sell the Galaxy Player.

Re:nVidia's CEO is a little behind the times. (2, Informative)

Anonymous Coward | more than 2 years ago | (#39531545)

I have a Nook Tablet, and I'm very happy with the hardware, happy with it as a book reader, but I wish it was a real Android tablet (with Bluetooth and Android Market^H^H^H^H^H^H^H^H^H^H^H^H^H^HGoogle Play).

So? Pick from CM7 [xda-developers.com] or CM9 [xda-developers.com] .

Enjoy.

Re:nVidia's CEO is a little behind the times. (1)

bistromath007 (1253428) | more than 2 years ago | (#39538149)

It's a Le Pan, their older model. (They only have two.) The hardware is a bit dated, but the thing does exactly what I need it to. Given that I got it shortly before becoming homeless, it's literally saved my life by allowing me to research schedules, resources, and bureaucracy at will. I miss my Real Computer, but this datapad has proven worth three times its weight in gold, in a form factor that is much more convenient and appealing to me than either a laptop or a smartphone.

GPU programming is a nightmare. (3, Interesting)

140Mandak262Jamuna (970587) | more than 2 years ago | (#39528793)

It is possible to use the GPU effectively to speed up some scientific simulations, usually fluid mechanics problems that can be solved by time marching (or physics governed by hyperbolic differential equations). But working with the GPU is a real PITA. There is no standardization, and there is no real support for any high-level languages. Of course they have bullet points saying "C++ is supported," but dig in and you find that you have to link with their library, you need to manage the memory, you need to manage the data pipeline and the fetch and cache, and the actual amount of code you can fit in their "processing" unit is trivially small. All it could hold turned out to be about 10 or so double-precision solution variables and the flux vector splitting for Navier-Stokes for just one triangle. That's about 40 lines of C code.
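For what it's worth, here is a minimal CUDA sketch of the workflow being described (the kernel and all names are hypothetical, just a stand-in for the kind of time-marching update mentioned above): you allocate device memory yourself, copy the data across the bus yourself, and the kernel itself is only a few lines of per-cell arithmetic.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Hypothetical per-cell update: each thread advances one cell of a 1-D field
// by simple upwind differencing. A real flux-vector-splitting kernel would be
// the same shape, just with more arithmetic per cell.
__global__ void advance(const double *u_old, double *u_new,
                        double c, double dt, double dx, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n)
        u_new[i] = u_old[i] - c * dt / dx * (u_old[i] - u_old[i - 1]);
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(double);

    // All of this bookkeeping is on the programmer: allocate on the device,
    // copy across the bus, launch, copy back.
    double *h_u = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_u[i] = (i < n / 2) ? 1.0 : 0.0;

    double *d_old, *d_new;
    cudaMalloc(&d_old, bytes);
    cudaMalloc(&d_new, bytes);
    cudaMemcpy(d_old, h_u, bytes, cudaMemcpyHostToDevice);

    int block = 256, grid = (n + block - 1) / block;
    for (int step = 0; step < 1000; ++step) {
        advance<<<grid, block>>>(d_old, d_new, 1.0, 1e-4, 1e-3, n);
        double *tmp = d_old; d_old = d_new; d_new = tmp;  // ping-pong buffers
    }

    cudaMemcpy(h_u, d_old, bytes, cudaMemcpyDeviceToHost);
    printf("u[n/2] = %f\n", h_u[n / 2]);

    cudaFree(d_old);
    cudaFree(d_new);
    free(h_u);
    return 0;
}

Roughly 40 lines, and most of it is moving data around rather than doing math, which is exactly the complaint.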

On top of everything, the binary is a mishmash of compiled executable chunks sitting in interpreted code. Essentially, if a competitor or hacker gets the "executable," they can see every bit of innovation you put into cramming your code into these tiny processors and reverse engineer your scientific algorithm at a very fine grain.

Then their sales critters create "buzz" and make misleading, almost dishonest, presentations about GPU programming and how it is going to achieve world domination.

GPU programming nightmare: improvements are coming (2)

IYagami (136831) | more than 2 years ago | (#39529923)

It is possible to use the GPU effectively to speed up some scientific simulations, usually fluid mechanics problems that can be solved by time marching (or physics governed by hyperbolic differential equations). But working with the GPU is a real PITA. There is no standardization, and there is no real support for any high-level languages. Of course they have bullet points saying "C++ is supported," but dig in and you find that you have to link with their library, you need to manage the memory, you need to manage the data pipeline and the fetch and cache, and the actual amount of code you can fit in their "processing" unit is trivially small. All it could hold turned out to be about 10 or so double-precision solution variables and the flux vector splitting for Navier-Stokes for just one triangle. That's about 40 lines of C code.

On top of everything, the binary is a mishmash of compiled executable chunks sitting in interpreted code. Essentially, if a competitor or hacker gets the "executable," they can see every bit of innovation you put into cramming your code into these tiny processors and reverse engineer your scientific algorithm at a very fine grain.

Then their sales critters create "buzz" and make misleading, almost dishonest, presentations about GPU programming and how it is going to achieve world domination.

According to Wikipedia, there are frameworks (like OpenCL http://en.wikipedia.org/wiki/OpenCL [wikipedia.org] ) that let you program in high-level languages and stay compatible across various platforms.

Re:GPU programming is a nightmare. (1)

vadim_t (324782) | more than 2 years ago | (#39529929)

Just like any cutting edge tech. Not so long ago you'd be writing graphics code in assembler. And dealing with the memory restrictions DOS had to offer.

On top of everything, the binary is a mismash of compiled executable chunks sitting in the interpreted code. Essentially the if a competitor or hacker gets the "executable" they can reverse engineer every bit of innovation you had done to cram your code into these tiny processors and reverse engineer your scientific algorithm at a very fine grain.

Big deal. It's funny how touchy people get the moment they do something vaguely original. The GPU's architecture is known, the optimization strategy for it is well documented. A big part of what you'll end up writing is just following the device's constraints, and so not really original.

Aren't scientists supposed to share data and knowledge, anyway?

Re:GPU programming is a nightmare. (-1)

Anonymous Coward | more than 2 years ago | (#39530775)

I don't mean to ruin your false equivalence fest, but sometimes people want to earn a living by selling their programs commercially. I know, weird...

Re:GPU programming is a nightmare. (0)

Anonymous Coward | more than 2 years ago | (#39531485)

I don't mean to ruin your false equivalence fest, but sometimes people want to earn a living by selling their programs commercially. I know, weird...

And? What does that have to do with anything?

We're talking about reverse engineering here, not piracy. That argument is tired and massively overused yet you still didn't apply it correctly.

Re:GPU programming is a nightmare. (1)

Anonymous Coward | more than 2 years ago | (#39530509)

It's like assembler programming - some people get it, some don't. I've never seen any ultra-high performance computing task where you don't have to manage all the variables you mention. A 10x improvement makes it all worthwhile - some projects get much greater improvements.

Stop complaining that the tools don't let you program a GPU in Java. If you can't take the heat, get out of the kitchen.

Re:GPU programming is a nightmare. (5, Insightful)

PaladinAlpha (645879) | more than 2 years ago | (#39530653)

Half of our department's research sits directly on CUDA, now, and I haven't really had this experience at all. CUDA is as standard as you can get for NVIDIA architecture -- ditto OpenCL for AMD. The problem with trying to abstract that is the same problem with trying to use something higher-level than C -- you're targeting an accelerator meant to take computational load, not a general-purpose computer. It's very much systems programming.

I'm honestly not really sure how much more abstract you could make it -- memory management is required because it's a fact of the hardware -- the GPU is across a bus and your compiler (or language) doesn't know more about your data semantics than you do. Pipelining and cache management are a fact of life in HPC already, and I haven't seen anything nutso you have to do to support proper instruction flow for nVidia cards (although I've mostly just targeted Fermi).

Re:GPU programming is a nightmare. (2, Informative)

Anonymous Coward | more than 2 years ago | (#39530853)

This is so far detached from reality it almost makes me wonder if it is someone shilling for Intel or another company that wants to defame GPU computing.

First, the claim that it is effective "usually" in PDE solving is absurd; examine the presentations at any HPC conference or the publications in any HPC journal and you will quickly find numerous successful uses in other fields. Off the top of my head, I have seen or worked on successful projects in image processing and computer vision, Monte Carlo simulations of varying complexity in any number of fields, optimization problems and risk analysis in computational finance, statistical analysis of data from experiments or observations, and (less commonly) in non-numerical-computing applications (some graph algorithms map fairly well to a GPU).

Second, the claim that GPU programming is a pain in the ass -- in general, there is no doubt. It requires more time investment due to the optimization required to get performance that justifies the hardware cost, and far more difficult debugging than a CPU program (particularly a serial CPU program). The rest of your claims here, however, are once again nonsense.

You say there is no standardization, but it is not even clear from context where you found standardization lacking -- the hardware? There are new architectures every few years, but they are almost always backwards compatible (i.e., they will run old code). The only thing the architecture updates do is expose new features, much like new CPU architectures may add to the instruction set. Or is it software standardization you're looking for? OpenCL and CUDA are both open standards -- one controlled by a very slow-moving board of industry representatives, one controlled by a fairly fast-moving single company. This is very, very similar to the state of graphics APIs, where OpenGL and DirectX fill similar roles.

The idea that they don't support high-level languages is questionable -- what defines a high-level language? If C or C++ qualify, then definitely CUDA and possibly OpenCL do as well. If they don't, then no -- but expecting to write for hardware targeted solely at high-performance and scientific computing in Ruby or whatever is idiotic. I would also love evidence (meaning official claims from a manufacturer, not some 3rd party who is just as misinformed as you) of claims to support C++. The closest I can think of is nVidia expanding the support for C++ features (templates, classes, ...) in CUDA -- but I've certainly never seen them or AMD claim to support C++ on the GPU. It sounds like you were expecting to take a C++ program, hit "Compile for GPU", and get massive parallelization for free, which reinforces the idea that you did not do a shred of research into GPU computing.

That you need to link with their library is obvious -- did you expect you would be communicating with the GPU without going through the driver? You're either going to use a library and compiler directly in the build process (CUDA) or indirectly at runtime (OpenCL). That you need to manage the memory is once again idiotic whining -- the people targeted by this are already managing their memory. This isn't to accelerate your shitty Web 2.0 application; it's for serious numerical computation of the sort that is almost always written in C++, C, or (ugh) Fortran. That you manage the data pipeline and fetch and cache is at best a half truth; the shared/local memory is similar to a cache, but both AMD and nVidia GPUs use an actual cache that is not controlled at all by the programmer (unless possibly you are modifying the .ptx files for a CUDA kernel? I have not gotten into that sort of thing). The claim about only being able to fit a limited amount of code is utter horseshit.

And, finally, the claim about how the code is stored -- this is true for OpenCL typically (there may be obfuscation methods or something along those lines, as used by other languages which allow easy retrieval of code -- I'm not sure), but CUDA can be compiled to binary files that are no more easily reverse engineered than something compiled to a binary for your operating system.

By and large, your post reads like you tried GPU computing 5 or 6 years ago when it was done by embedding the computation in OpenGL shaders and using textures for input/output. Perhaps that is your experience, and it explains a lot of your complaints; but if that's the case then you are fairly stupid for believing that the state of the art has not advanced in 5 or 6 years. Alternately, you did little or no research, tried to write some GPU code using modern APIs, failed miserably, and decided to blame it on bad technology because that is easier for you to believe than that you are simply not talented or dedicated enough. To this I can only point out that GPU-backed clusters are increasingly dense in the Top500, and the general consensus in high performance- and scientific-computing circles, both academic and commercial, is that they are hugely beneficial on a wide range of problems.

Finally, it's worth noting here that although this post is exclusively positive with regard to GPUs (being a defense of their use), they are of course not a panacea. They've got a long way (both in terms of software and hardware) to go before your average code monkey will be able to make good use of them, there are (and probably will be, for the foreseeable future) problems that simply do not map well to their memory architecture, and putting similar amounts of effort into writing hand-optimized parallel CPU code can go a long way in closing the performance gap between CPUs and GPUs for many problems.

Re:GPU programming is a nightmare. (4, Interesting)

maccodemonkey (1438585) | more than 2 years ago | (#39531469)

In my experience, GPU programming works exactly like you'd expect it to work. Your nightmare doesn't sound like it's with GPU programming, it sounds like it's with NVidia's marketing.

GPU processors are really small, so everything you've listed here is expected: the code size, the variable limits, and so on. The advantage is that you have thousands of them at your disposal. That makes GPUs extremely good when you need to run a kernel for every x from 0 to a trillion: upload the problem set to VRAM and send the cores to work.
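As a rough sketch of that pattern (a made-up example, not anything from NVidia's docs): one tiny kernel, launched across thousands of threads, walking an arbitrarily large index range with a grid-stride loop.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Grid-stride loop: a fixed grid of threads walks an arbitrarily large index
// space -- the "run a kernel for every x" pattern described above. saxpy is
// just a stand-in for whatever the real per-element work is.
__global__ void saxpy(float a, const float *x, float *y, size_t n)
{
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        y[i] = a * x[i] + y[i];
}

int main()
{
    size_t n = 1 << 26;                     // ~67 million elements
    size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes), *h_y = (float *)malloc(bytes);
    for (size_t i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    // Upload the problem set to VRAM...
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // ...and send the cores to work: thousands of threads, one tiny kernel.
    saxpy<<<4096, 256>>>(3.0f, d_x, d_y, n);
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", h_y[0]);          // expect 5.0
    cudaFree(d_x);
    cudaFree(d_y);
    free(h_x);
    free(h_y);
    return 0;
}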

Stuff like C++ and other high-level languages is also not a good fit for this sort of work. I'm not even sure why people are bothering with C++ on GPGPU, to be perfectly frank. Again, you're writing kernels here, not entire programs. C++ is honestly bulky for GPGPU work and I can't imagine what I'd use it for. Both CUDA and OpenCL are already pretty high level; go much further than that and you risk sacrificing performance.

Intermediate, interpreted-style code is also a good thing: it's usually JIT-compiled for the architecture you're running on. In the case of OpenCL and CUDA, it can be recompiled to run on an ATI card, an NVidia card, or the local CPU, all of which have different machine languages that you won't know about until runtime.

It sounds like you're angry because GPU programming isn't very much like programming for CPUs, and you'd be right. That's the nature of the hardware: it's built very differently and is optimized for different tasks. Whether that's because you were sold a false bill of goods by NVidia, I don't know. But it doesn't mean GPU programming is broken; it just may not be for you. It mostly sounds like you're trying to cram too much into your individual kernels, though.

Re:GPU programming is a nightmare. (0)

Anonymous Coward | more than 2 years ago | (#39532833)

While I agree with the general notion that C++'s bloat on a GPU is a recipe for disappointment, I would like to point out that templates for global functions do actually give CUDA a significant advantage over OpenCL; even in the fairly trivial applications included in the nVidia CUDA SDK there are good examples of when templates can be very beneficial.
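For instance, something along these lines (a toy sketch, not taken from the SDK): one templated __global__ function instantiated for both float and double, with no duplicated kernel source.

#include <cuda_runtime.h>
#include <cstdio>

// One templated kernel covers float and double without duplicating source.
template <typename T>
__global__ void scale(T *data, T factor, size_t n)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const size_t n = 1024;
    float  h_f[1024];
    double h_d[1024];
    for (size_t i = 0; i < n; ++i) { h_f[i] = 1.5f; h_d[i] = 1.5; }

    float  *d_f; cudaMalloc(&d_f, n * sizeof(float));
    double *d_d; cudaMalloc(&d_d, n * sizeof(double));
    cudaMemcpy(d_f, h_f, n * sizeof(float),  cudaMemcpyHostToDevice);
    cudaMemcpy(d_d, h_d, n * sizeof(double), cudaMemcpyHostToDevice);

    // Same source, two instantiations, resolved at compile time.
    scale<float> <<<(n + 255) / 256, 256>>>(d_f, 2.0f, n);
    scale<double> <<<(n + 255) / 256, 256>>>(d_d, 2.0,  n);

    cudaMemcpy(h_f, d_f, n * sizeof(float),  cudaMemcpyDeviceToHost);
    cudaMemcpy(h_d, d_d, n * sizeof(double), cudaMemcpyDeviceToHost);
    printf("%f %f\n", h_f[0], h_d[0]);  // both 3.0
    cudaFree(d_f);
    cudaFree(d_d);
    return 0;
}

In OpenCL C (which has no templates) the same thing tends to turn into duplicated kernels or run-time string pasting.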

Re:GPU programming is a nightmare. (0)

Anonymous Coward | more than 2 years ago | (#39532487)

The fact that this dreck was modded +5 is pretty much a poster boy for how shitty Slashdot's comments sections have become lately. Inflammatory and contrarian nonsense gets modded as Interesting and Insightful regardless of how wildly factually incorrect it is, while comments offering real information struggle to get past +2 or +3.

Bonus: CAPTCHA for this post was "laughed".

Re:GPU programming is a nightmare. (1)

TheRaven64 (641858) | more than 2 years ago | (#39532983)

It doesn't sound like you've done any GPU programming for a few years. These days, OpenCL is pretty well supported. You need some C code on the host for moving data between host and GPU memory and for launching the GPU kernels, but the kernels themselves are written in a dialect of C that is designed for parallel scientific computing.

If you want something even easier, both Pathscale and CAPS International provide C/C++ compilers that support HMPP, so you can easily annotate loops with some pragmas to move them to the GPU and have the compiler automatically generate the code for shuffling data around.

200 dollar android tablets coming? (1)

nurb432 (527695) | more than 2 years ago | (#39530687)

This summer? Odd, I already bought one for $180 that has glasses-free 3D.

*yawn*

Server compute? How? (1)

Heretic2 (117767) | more than 2 years ago | (#39531155)

How will they bring GPU compute to servers when they killed their FP64 performance? nVidia and AMD just flip-flopped their GPU-compute prowess with regard to FP64.

Re:Server compute? How? (0)

Anonymous Coward | more than 2 years ago | (#39531615)

What? The just-released Kepler is the gaming line of cards, not the HPC line. The expectation is that the Tesla version will not be released until late 2012, but it will include higher double-precision throughput, ECC, and possibly more cores and higher memory bandwidth. The Tesla cards will likely have a higher TDP than the gaming cards and can thus translate that extra power budget into HPC-relevant performance.

If they started out with the high-end products, then there would be massive yield issues. They learned that during the Fermi generation.

Re:Server compute? How? (0)

Anonymous Coward | more than 2 years ago | (#39532445)

GPU compute

GPU computation.

You wouldn't feed garbage like that to a compiler, so why settle for it when communicating with other humans?

Prices going up??? (1)

uvajed_ekil (914487) | more than 2 years ago | (#39532159)

Nvidia's CEO is also predicting this summer will see the rise of $200 Android tablets.

So Android tablet prices are going up? You can already buy sub-$200 tablets all day long from Amazon, or from Big Lots, to name one example, if you prefer a brick-and-mortar option. Both have some pretty useful ones for $99 right now. They are not iPads, but they are quite serviceable for only a c-note. I just saw one with a capacitive screen, Android 3.1, 1GB RAM, 8 GB internal storage (I think), and an SD card slot, for $99. Not blazingly fast, surely, but fairly capable and dirt cheap. If you really want a tablet and you are in the Western world, chances are you can already afford one.

Re:Prices going up??? (1)

Lussarn (105276) | more than 2 years ago | (#39532219)

I think what he says is that there will be more $200 tablets on the market... Not that they will rise in price.
