itwbennett writes According to a report in the Korea IT Times, Samsung Electronics has begun production of the A9 processor, the next-generation ARM-based CPU for the iPhone and iPad. The Korea IT Times says Samsung has production lines capable of FinFET process production (a cutting-edge design for semiconductors that many other manufacturers, including AMD, IBM and TSMC, are adopting) in Austin, Texas and Giheung, Korea, but production is only taking place in Austin. Samsung invested $3.9 billion in that plant specifically to make chips for Apple. So now Apple can say its CPU is "Made in America."
114 comments | about a week ago
MojoKid writes: AMD just dropped its new Catalyst Omega driver package, the culmination of six months of development work. AMD Catalyst Omega reportedly brings over 20 new features and a wealth of bug fixes to the table, along with performance increases on both discrete AMD Radeon GPUs and integrated AMD APUs. Some of the new functionality includes Virtual Super Resolution, or VSR. VSR is "game- and engine-agnostic" and renders content at up to 4K resolution, then displays it at a resolution that your monitor actually supports. AMD says VSR allows for increased image quality, similar in concept to Super Sampling Anti-Aliasing (SSAA). Another added perk of VSR is the ability to see more content on the screen at once. To take advantage of VSR, you'll need a Radeon R9 295X2, R9 290X, R9 290, or R9 285 discrete graphics card. Both single- and multi-GPU configurations are currently supported. VSR is essentially AMD's answer to NVIDIA's DSR, or Dynamic Super Resolution. In addition, AMD is claiming performance enhancements in a number of top titles with these new drivers. On an AMD 7000-Series APU, for example, the reported gains range from as little as 6 percent in FIFA Online to as much as 29 percent in Batman: Arkham Origins. On discrete GPUs, an AMD Radeon R9 290X's performance increases ranged from 8 percent in Grid 2 to roughly 16 percent in BioShock Infinite.
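The core idea behind VSR (and NVIDIA's DSR) is simple: render to an oversized internal buffer, then filter it down to the monitor's native resolution. A minimal sketch of that downscale step, assuming a plain box filter and a 2x scale factor (AMD's actual filtering is not public, so treat this as illustrative only):

```python
def downsample(frame, factor):
    """Average factor x factor blocks of an oversized frame down to
    the display resolution -- the basic supersampling downscale."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Render internally at 4x4, display at 2x2 (a 2x "virtual" resolution).
rendered = [[0, 0, 4, 4],
            [0, 0, 4, 4],
            [8, 8, 2, 2],
            [8, 8, 2, 2]]
print(downsample(rendered, 2))  # [[0.0, 4.0], [8.0, 2.0]]
```

Averaging four rendered samples per displayed pixel is why the result resembles SSAA: edges that alias at native resolution get blended from the extra samples.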
73 comments | about two weeks ago
MojoKid writes To say that BioWare has something to prove with Dragon Age: Inquisition is an understatement. The first game, Dragon Age: Origins, was a colossal, sprawling, unabashed throwback to classic RPGs. Conversely, Dragon Age: Inquisition doesn't just tell an epic story, it evolves in a way that leaves you, as the Inquisitor, leading an army. Creating that sense of scope required a fundamentally different approach to gameplay. Neither Dragon Age: Origins nor Dragon Age II had a true "open" world in the sense that Skyrim is an open world. Instead, players clicked on a location and auto-traveled across the map from Point A to Point B. Thus, a village might be contained within a single map, while a major city might have 10-12 different locations to explore. Inquisition keeps the concept of maps as opposed to a completely open world, but it blows those maps up to gargantuan sizes. Instead of simply consisting of a single town or a bit of wilderness, the new maps in Dragon Age: Inquisition are chock-full of areas to explore, side quests, crafting materials to gather, and caves, dungeons, mountain peaks, flowing rivers, and roving bands of monsters. And Inquisition doesn't forget the small stuff — the companion quests, the fleshed-out NPCs, or the rich storytelling — it just seeks to put those events in a much larger context across a broad geographical area. Dragon Age: Inquisition is one of the best RPGs to come along in a long time. Never has a game tried to straddle both the large-scale, 10,000-foot master plan and the small-scale, intimate adventure and hit both so well. In terms of graphics performance, you might be surprised to learn that a Radeon R9 290X has better frame delivery than a GeForce GTX 980, despite the similarity in the overall frame rate. The worst frame time for a Radeon R9 290X is just 38.5ms, or 26 FPS, while a GeForce GTX 980 is at 46.7ms, or 21 FPS.
AMD takes home an overall win in Dragon Age: Inquisition currently, though Mantle support isn't really ready for prime time. In related news, hypnosec sends word that Chinese hackers claim to have cracked Denuvo DRM, the anti-piracy solution for Dragon Age: Inquisition. A Chinese hacker group has claimed to have cracked Denuvo DRM — the latest measure to protect PC games from piracy. First used in FIFA 15 for PC, the Denuvo anti-piracy solution managed to keep FIFA 15 uncracked for two months and Dragon Age: Inquisition for a month. However, the Chinese hackers claim they managed to rip open the DRM after fifteen days of work, and have uploaded a video to prove their accomplishment. A couple of things need to be pointed out here. First, the team has merely cracked the DRM itself, which doesn't necessarily mean that working cracks are out there. Also, the crack only works on Windows 7 64-bit systems and won't work on Windows 8 or Windows 7 32-bit systems for now. The team is currently working to collect hardware data on processor identification codes.
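The frame-time figures quoted for the two cards convert to instantaneous FPS by a simple reciprocal (1000 ms divided by the frame time), which is where the 26 FPS and 21 FPS numbers come from. A quick sketch:

```python
def frame_time_to_fps(ms):
    """Convert a single frame's render time in milliseconds
    to the instantaneous frames-per-second it implies."""
    return 1000.0 / ms

# Worst-case frame times quoted in the review:
print(round(frame_time_to_fps(38.5)))  # R9 290X -> 26 FPS
print(round(frame_time_to_fps(46.7)))  # GTX 980 -> 21 FPS
```

This is why worst-case frame times matter more than average frame rate for perceived smoothness: a single 46.7ms frame is a visible stutter even if the average sits near 60 FPS.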
91 comments | about two weeks ago
jones_supa writes We are all aware of the various chirping and whining sounds that electronics can produce. Modern graphics cards often suffer from this kind of problem in the form of coil whine. But how widespread is it really? Hardware Canucks put 50 new graphics cards side-by-side to compare them solely from the perspective of subjective acoustic disturbance. NVIDIA's reference platforms tended to be quite well behaved, just like their board partners' custom designs. The same can't be said about AMD, since their reference R9 290X and R9 290 should be avoided if you're at all concerned about squealing or any other odd noise a GPU can make. However, the custom Radeon-branded SKUs should usually be a safe choice. While the amount and intensity of coil whine largely seems to boil down to luck of the draw, at least most board partners are quite friendly regarding their return policies concerning it.
111 comments | about a month ago
MojoKid (1002251) writes "Life is hard when you're a AAA publisher. Last month, Ubisoft blamed weak console hardware for the troubles it had bringing Assassin's Creed Unity up to speed, claiming that it could've hit 100 FPS but for weak console CPUs. Now, in the wake of the game's disastrous launch, the company has changed tactics — suddenly, all of this is AMD's fault. An official company forum post currently reads: "We are aware that the graphics performance of Assassin's Creed Unity on PC may be adversely affected by certain AMD CPU and GPU configurations. This should not affect the vast majority of PC players, but rest assured that AMD and Ubisoft are continuing to work together closely to resolve the issue, and will provide more information as soon as it is available." There are multiple problems with this assessment. First, there's no equivalent Nvidia-centric post on the main forum, and no mention of the fact that if you own an Nvidia card of any vintage but a GTX 970 or 980, you're going to see less-than-ideal performance. According to sources, the problem with Assassin's Creed Unity is that the game is issuing tens of thousands of draw calls — up to 50,000 and beyond, in some cases. This is precisely the kind of operation that Mantle and DirectX 12 are designed to handle, but DirectX 11, even 11.2, isn't capable of efficiently processing that many calls at once. It's a fundamental limit of the API, and it kicks in harshly in ways that adding more CPU cores simply can't help with.
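To see why 50,000 draw calls is a problem, note that each call carries fixed CPU-side submission and validation overhead, and the total scales linearly with the call count. A toy model of that per-frame cost — the per-call overhead figures below are illustrative assumptions, not measured values for any API:

```python
def cpu_frame_cost_ms(draw_calls, overhead_us_per_call):
    """Toy model: CPU milliseconds spent per frame just submitting
    draw calls, given an assumed fixed overhead per call."""
    return draw_calls * overhead_us_per_call / 1000.0

calls = 50_000  # the figure cited for Assassin's Creed Unity

# Assumed overheads: a heavyweight API path vs. a thin one.
heavy = cpu_frame_cost_ms(calls, 1.0)   # ~1 us per call (assumed)
thin  = cpu_frame_cost_ms(calls, 0.25)  # ~0.25 us per call (assumed)
print(heavy, thin)  # 50.0 12.5
```

Under these assumptions, submission alone eats 50ms of CPU time per frame on the heavyweight path — already past the 33ms budget of 30 FPS before any AI or game logic runs — which is why thinner-overhead APIs like Mantle and DirectX 12 target exactly this bottleneck, and why extra cores don't help when submission is serialized.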
262 comments | about a month ago
MojoKid writes It has been over six years since Intel first unveiled its Atom CPUs and detailed its plans for new, ultra-mobile devices. The company's efforts to break into smartphone and tablet sales, while turning a profit, have largely come to naught. Nonetheless, company CEO Brian Krzanich remains optimistic. Speaking to reporters recently, Krzanich opined that the company's new manufacturing partners like Rockchip and Spreadtrum would convert entirely to Intel architectures within the next few years. Krzanich has argued that with Qualcomm and MediaTek dominating the market, it's going to be tougher and tougher for little guys like Rockchip and Spreadtrum to compete in the same spaces. There's truth to that argument, to be sure, but Intel's ability to offer a competitive alternative is unproven. According to a report from JP Morgan, Intel's cost-per-wafer is currently estimated as equivalent to TSMC's average selling price per wafer — meaning TSMC is making money well below Intel's break-even. Today, Intel is unquestionably capable of building tablet processors that offer a good overall experience but the question of what defines a "good" experience is measured in its similarity to ARM. It's hard to imagine that Intel wants to build market share as an invisible partner, but in order to fundamentally change the way people think about Intel hardware in tablets and smartphones, it needs to go beyond simply being "as good" and break into territory that leaves people asking: "Is the ARM core just as good as the Intel chip?"
91 comments | about a month ago
MojoKid writes Dell's Alienware division recently released a radical redesign of their Area-51 gaming desktop. With 45-degree angled front and rear face plates designed to direct controls and I/O up toward the user, channel cool air in, and vent warm air up and away from the rear of the chassis, this triangular-shaped machine grabs your attention right away. In testing and benchmarks, the Area-51's new design enables top-end performance with thermal and acoustic profiles that are fairly impressive versus most high-end gaming PC systems. The chassis design is also pretty clean, modular and easily serviceable. Base system pricing isn't too bad, starting at $1699, with the ability to dial things way up to an 8-core Haswell-E chip and triple-GPU graphics from NVIDIA or AMD. The test system reviewed at HotHardware was powered by a six-core Core i7-5930K chip and three GeForce GTX 980 cards in SLI. As expected, it ripped through the benchmarks, though the price as configured and tested is significantly higher.
138 comments | about 2 months ago
An anonymous reader writes: AMD recently presented plans to unify their open-source and Catalyst Linux drivers at the open source XDC2014 conference in France. NVIDIA's rebuttal presentation focused on supporting Mir and Wayland on Linux. The next-generation display stacks are competing to succeed the X.Org Server. NVIDIA is partially refactoring their Linux graphics driver to support EGL outside of X11, proposing new EGL extensions for better driver interoperability with Wayland/Mir, and adding support for the KMS APIs to their driver. NVIDIA's binary driver will support the KMS APIs/ioctls but will use their own implementation of kernel mode-setting. The EGL improvements are said to land in their closed-source driver this autumn, while the other changes probably won't be seen until next year.
80 comments | about 2 months ago
MojoKid (1002251) writes A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles is a result of weak CPU performance rather than GPU compute. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU that's under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified; the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom. One thread is reserved for the OS, and a few more cores will be used for processing the 3D pipeline. Between the two, Ubisoft may have only had 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.
338 comments | about 2 months ago
An anonymous reader writes: AMD is moving forward with their plans to develop a new open-source Linux driver model for their Radeon and FirePro graphics processors. Their unified Linux driver model is moving forward, albeit in a slightly different form than was planned early this year. They're now developing a new "AMDGPU" kernel driver to power both the open and closed-source graphics components. This new driver model will also only apply to future generations of AMD GPUs. Catalyst is not being open-sourced, but will become a self-contained user-space blob, while the DRM/libdrm/DDX components will be open-source and shared. This new model is more open-source friendly, places greater emphasis on their mainline kernel driver, and should help Catalyst support Mir and Wayland.
56 comments | about 2 months ago
An anonymous reader writes The Linux 3.17 kernel was officially released today. Linux 3.17 presents a number of new features that include working open-source AMD Hawaii GPU support, an Xbox One controller driver, free-fall support for Toshiba laptops, numerous ARM updates, and other changes.
114 comments | about 2 months ago
An anonymous reader writes Counter-Strike: Global Offensive has finally been released for Linux, two years after its Windows debut. The game is reported to work even on the open-source Intel Linux graphics drivers, but your mileage may vary. As for the proprietary drivers, NVIDIA continues to dominate Linux gaming over AMD, as Catalyst still suffers from performance deficits and other OpenGL issues.
93 comments | about 3 months ago
MojoKid (1002251) writes NVIDIA has launched two new high-end graphics cards based on their latest Maxwell architecture. The GeForce GTX 980 and GTX 970 are based on Maxwell and replace NVIDIA's current high-end offerings, the GeForce GTX 780 Ti, GTX 780, and GTX 770. NVIDIA's GeForce GTX 980 and GTX 970 are somewhat similar as the cards share the same 4GB frame buffer and GM204 GPU, but the GTX 970's GPU is clocked a bit lower and features fewer active Streaming Multiprocessors and CUDA cores. The GeForce GTX 980's GM204 GPU has all of its functional blocks enabled. The fully-loaded GeForce GTX 980 GM204 GPU has a base clock of 1126MHz and a Boost clock of 1216MHz. The GTX 970 clocks in with a base clock of 1050MHz and Boost clock of 1178MHz. The 4GB of video memory on both cards is clocked at a blisteringly-fast 7GHz (effective GDDR5 data rate). NVIDIA was able to optimize the GM204's power efficiency, however, by tweaking virtually every part of the GPU. NVIDIA claims that Maxwell SMs (Streaming Multiprocessors) offer double the performance of GK104 and double the perf per watt as well. NVIDIA has also added support for new features, namely Dynamic Super Resolution (DSR), Multi-Frame Sampled Anti-Aliasing (MFAA), and Voxel Global Illumination (VXGI). Performance-wise, the GeForce GTX 980 is the fastest single-GPU powered graphics card ever tested. The GeForce GTX 970 isn't as dominant overall, but its performance was impressive nonetheless. The GeForce GTX 970 typically performed about on par with a GeForce GTX Titan and traded blows with the Radeon R9 290X.
125 comments | about 3 months ago
Vigile (99919) writes AMD looks to continue addressing the mainstream PC enthusiast and gamer with releases in two different component categories. First, today marks the launch of the Radeon R9 285 graphics card, a $250 option based on a brand-new piece of silicon dubbed Tonga. This GPU has nearly identical performance to the R9 280 that came before it, but adds support for XDMA PCIe CrossFire and TrueAudio DSP technology, and is FreeSync capable (AMD's response to NVIDIA G-Sync). On the CPU side, AMD has refreshed its FX product line with three new models (FX-8370, FX-8370e and FX-8320e) with lower TDPs and supposedly better efficiency. The problem, of course, is that while Intel is already sampling 14nm parts, these Vishera-based CPUs continue to be manufactured on GlobalFoundries' 32nm process. The result is smaller-than-expected performance boosts and efficiency gains. For a similar review of the new card, see Hot Hardware's page-by-page unpacking.
98 comments | about 4 months ago
New submitter nrjperera (2669521) submits news of a new laptop from HP that's in Chromebook (or, a few years ago, "netbook") territory, price-wise, but loaded with Windows 8.1 instead. Microsoft has teamed up with HP to make an affordable Windows laptop to beat Google Chromebooks at their own game. German website Mobile Geeks has found some leaked information about this upcoming HP laptop, dubbed Stream 14, including its specifications. According to the leaked data sheet, the HP Stream 14 laptop will share similar specs with HP's cheap Chromebook. It will ship with an AMD A4 Micro processor, 2GB of RAM, 32GB of flash storage and a display with 1,366 x 768 screen resolution. Microsoft will likely offer 100GB of OneDrive cloud storage with the device to balance the limited storage option.
215 comments | about 4 months ago
MojoKid (1002251) writes AMD is launching a new family of products today, but unless you follow the rumor mill closely, it's probably not something you'd expect. It's not a new CPU, APU, or GPU. Today, AMD is launching its first line of solid state drives (SSDs), targeted squarely at AMD enthusiasts. AMD is calling the new family of drives the Radeon R7 Series SSD, a name borrowed from its popular mid-range line of graphics cards. The new Radeon R7 Series SSDs feature OCZ and Toshiba technology, but with proprietary firmware geared toward write performance and high endurance. Open up one of AMD's new SSDs and you'll see OCZ's Indilinx Barefoot 3 M00 controller on board—the same controller used in the OCZ Vector 150, though it is clocked higher in these drives. That controller is paired with A19nm Toshiba MLC (Multi-Level Cell) NAND flash memory and a DDR3-1333MHz DRAM cache. The 120GB and 240GB drives sport 512MB of cache memory, while the 480GB model will be outfitted with 1GB. Interestingly enough, AMD Radeon R7 Series SSDs are some of the highest-performing all-around SATA SSDs tested to date. IOPS performance is among the best seen in a consumer-class SSD, write throughput and access times are highly competitive across the board, and the drive offered consistent performance regardless of the data type being transferred. Read performance is also strong, though not quite as stand-out as write performance.
64 comments | about 4 months ago
Lucas123 writes An AMD website in China has leaked information about the upcoming release of a line of SSDs aimed at gamers and professionals that will offer top sequential read/write speeds of 550MB/s and 530MB/s, respectively. AMD confirmed the upcoming news, but no pricing was available yet. The SSDs will come in 120GB, 240GB and 480GB capacities and will use Toshiba's 19-nanometer flash lithography technology. According to IHS, AMD is likely entering the gaming SSD market because desktop SSD shipments are expected to experience a 39% CAGR between now and 2018.
110 comments | about 4 months ago
MojoKid (1002251) writes "AMD updated its family of Kaveri-based A-Series APUs for desktop systems recently, namely the A10-7800 and the A6-7400K. The A10-7800 has 12 total compute cores, 4 CPU and 8 GPU cores, with average and maximum turbo clock speeds of 3.5GHz and 3.9GHz, respectively. The A6-7400K arrives with 6 total cores (2 CPU, 4 GPU) and the same clock frequencies. ... The AMD A10-7800 APU's performance is somewhat mixed, though it is a decent performer overall. Its Steamroller-based CPU cores do not do much to make up ground versus Intel's processors, so in more CPU-bound workloads, Intel's dual-core Core i3-4330 competes favorably with AMD's quad-cores. And in terms of IPC and single-thread performance, Intel maintains a big lead. Factor graphics into the equation, however, and the tides turn completely. The GCN-based graphics engine in Kaveri is a major step up over the previous gen, and much more powerful than Intel's mainstream offerings. The A10-7800's power consumption characteristics are also more desirable versus the Richland-based A10-6800K."
117 comments | about 5 months ago
Dputiger (561114) writes "It has been almost two years since AMD launched the FirePro W9000 and kicked off a heated battle in the workstation GPU wars with NVIDIA. AMD recently countered with the powerful FirePro W9100, a new card based on the same Hawaii-class GPU as the desktop R9 290X, but aimed at the professional workstation market. The W9100's GPU features 2,816 stream processors, and the card boasts 320GB/s of memory bandwidth and six mini-DisplayPorts, all of which support DP1.2 and 4K output. The W9100 carries more RAM than any other AMD GPU as well: a whopping 16GB of GDDR5 on a single card. Even NVIDIA's top-end Quadro K6000 tops out at 12GB, which means AMD sits in a class by itself in this area. In terms of performance, this review shows that the FirePro W9100 doesn't always outshine its competition, but its price/performance ratio keeps it firmly in the running. If AMD continues to improve its product mix and overall software support, it should close the gap even more in the pro GPU market in the next 18-24 months."
42 comments | about 5 months ago
MojoKid writes: "When NSA whistleblower Edward Snowden came forth last year with U.S. government spying secrets, it didn't take long to realize that some of the information revealed could bring on serious repercussions — not just for the U.S. government, but also for U.S.-based companies. The latest to feel the hit? None other than Apple, and in a region where the company has been working hard to increase market share: China. China, via state media, has today declared that Apple's iPhone is a threat to national security — all because of its thorough tracking capabilities. It has the ability to keep track of user locations, and to the country, this could potentially reveal "state secrets" somehow. It's being noted that the iPhone will continue to track the user to some extent even if the overall feature is disabled. China's iPhone ousting comes hot on the heels of Russia's industry and trade ministry deeming AMD and Intel processors untrustworthy. The nation will instead be building its own ARM-based "Baikal" processor."
143 comments | about 5 months ago