Data Center With a Brain: Google Using Machine Learning In Server Farms
Machine learning has already been proposed as a way to improve the performance and resource efficiency of large-scale datacenters.
Detailed information on two of the best-known examples, from Stanford and Berkeley, can be found below:
Stanford Researchers Improve Efficiency of Cloud Computing
tabate (3593817) writes "Stanford researchers Christina Delimitrou and Christos Kozyrakis have developed a cluster management system called Quasar that significantly boosts datacenter utilization. Current utilization is notoriously low, primarily because users who submit applications to these systems are extremely conservative when reserving resources, often overprovisioning by an order of magnitude or more to sidestep performance unpredictability. Quasar takes a different approach: it asks the user to specify a performance target for an application (e.g., 99th percentile latency) and translates that target into resources, using techniques similar to those behind online recommendation systems such as Netflix's or Amazon's. By projecting a small profiling signal for a new workload against the massive dataset assembled from previously scheduled applications, Quasar can find similarities between the resource preferences of seemingly dissimilar workloads. This way, resource allocations are sized much more tightly to just meet the performance requirements of incoming workloads, and datacenter utilization improves without sacrificing application performance."
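The projection step described above resembles collaborative filtering: a short profiling run fills in a few entries of a workload's performance-versus-resources row, and a low-rank model built from past workloads estimates the rest. The sketch below is a toy illustration of that idea, not Quasar's actual implementation; the matrix, workload data, and performance target are all made up for the example.

```python
import numpy as np

# Hypothetical history: rows = previously scheduled workloads,
# columns = resource allocations (say 1, 2, 4, 8 cores),
# values = normalized performance measured at that allocation.
history = np.array([
    [0.2, 0.4, 0.8, 1.0],   # scales steadily with more cores
    [0.5, 0.9, 1.0, 1.0],   # saturates early
    [0.1, 0.2, 0.4, 0.8],   # needs many cores to perform
    [0.6, 0.8, 1.0, 1.0],
])

# Low-rank decomposition of the history (the "massive dataset").
U, s, Vt = np.linalg.svd(history, full_matrices=False)
k = 2                 # keep the top-k latent resource-preference factors
Vk = Vt[:k]

# A new workload is profiled briefly on just two allocations;
# the remaining entries of its row are unknown (NaN).
profile = np.array([0.15, np.nan, np.nan, 0.75])
known = ~np.isnan(profile)

# Least-squares projection of the sparse profile onto the latent
# basis, then reconstruction of the full performance curve.
coeffs, *_ = np.linalg.lstsq(Vk[:, known].T, profile[known], rcond=None)
estimate = coeffs @ Vk

# Choose the smallest allocation predicted to meet the target,
# rather than conservatively overprovisioning.
target = 0.7
meets = np.nonzero(estimate >= target)[0]
alloc_idx = int(meets[0]) if meets.size else len(estimate) - 1
```

The key design point mirrors the blurb: instead of asking users to guess a reservation, the system learns a shared low-dimensional structure across workloads, so a cheap profiling signal is enough to place a new workload on that structure and size its allocation tightly.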