
Intel Announces Avoton Server Architecture and Software Defined Services Effort

Unknown Lamer posted about 9 months ago | from the who-needs-hardware dept.


MojoKid writes "Intel unveiled a number of new data center initiatives this week as part of its broad product strategy to redefine some of its market goals. Santa Clara has begun focusing on finding ways to expand the utility of its low-power Atom servers, including the upcoming Avoton Atom products, which are based on the 22nm Bay Trail architecture. Intel isn't just pushing Avoton as a low-power solution that'll compete with products from ARM and AMD, but as the linchpin of a system for software defined networking and software defined storage capabilities. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch. Software defined networking replaces this by using software to manage and monitor traffic from a central controller. Intel is moving towards such a model and talking it up as an option because it moves control away from specialized hardware baked into expensive routers made by companies that aren't Intel, and towards centralized technology Intel can bake into the CPU itself."
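
To make the control-plane/data-plane split described above concrete, here is a minimal Python sketch. It is purely illustrative (the class and method names are invented, not Intel's or any real SDN controller's API): a central controller computes forwarding policy and pushes match/action rules to every switch, while the switches only look up rules and forward packets.

<ecode>
# Hypothetical sketch of the control-plane / data-plane split: the controller
# decides where traffic goes; switches only match rules and move the bits.

class Switch:
    """Data plane: matches packets against installed rules and forwards them."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                      # dst prefix -> output port

    def install_rule(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, packet):
        for prefix, port in self.flow_table.items():
            if packet["dst"].startswith(prefix):
                return f"{self.name}: {packet['dst']} -> port {port}"
        return f"{self.name}: no rule, punt to controller"


class Controller:
    """Control plane: holds the central view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def set_policy(self, dst_prefix, out_port):
        # One decision here replaces per-box configuration on every switch.
        for sw in self.switches:
            sw.install_rule(dst_prefix, out_port)


if __name__ == "__main__":
    switches = [Switch("sw1"), Switch("sw2")]
    ctrl = Controller(switches)
    ctrl.set_policy("10.1.", out_port=3)
    print(switches[0].forward({"dst": "10.1.2.7"}))      # matches the rule
    print(switches[1].forward({"dst": "192.168.0.5"}))   # no rule installed
</ecode>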


But can this send the data to NSA immediately? (0)

Anonymous Coward | about 9 months ago | (#44365751)

It'd be nice if Intel could build in a function to send the data immediately to the NSA, yes?

Re:But can this send the data to NSA immediately? (1)

symbolset (646467) | about 9 months ago | (#44367827)

You know what would be even cooler? A monthly AI bot that took all of the hot memes in tech and applied them to "prospective Intel tech" that may or may not ever appear. That would probably save Intel a billion dollars a year in marketing expense over having it done manually. Seriously Intel, if you're going to do this year after year after year, you may as well automate it and save some money. You're all about automating repetitive stuff, right?

It could enumerate monthly the latest convincing reasons why your latest mobile tech is going to take over the Woooooorld, just like we've been hearing for the last decade. But without the expense of employing actual artists to draw pictures of "this is what an Intel-based mobile world might look like."

Re:But can this send the data to NSA immediately? (1)

symbolset (646467) | about 9 months ago | (#44367857)

Specifically: the human exec who has to stand up and say that hundreds of tablet "design wins" are going to totally kick the iPad's ass would probably rather you did that with an animatronic pseudorep so his future career prospects are not impaired.

BY THE POWER OF GRAYSTOKE !! (0)

Anonymous Coward | about 9 months ago | (#44365823)

Intel has the POWER !!

AMD ?? Not so much !! Really, AMD !! Come on !! If you can't play with the big boys get out !! Get out now !!

Re:BY THE POWER OF GRAYSTOKE !! (0)

Anonymous Coward | about 9 months ago | (#44365929)

Skeletor at least has virtualization (Pacifica) in EVERY CPU that is x64.
He-Man has 64-bit chips with no virtualization tech enabled. Nasty stuff.
I can't fault Skeletor, because my E-350 is running VMware ESXi like a champ :)

Re:BY THE POWER OF GRAYSTOKE !! (1)

PingSpike (947548) | about 9 months ago | (#44369305)

Do you mean VT-d/AMD-Vi (the i and d being the important bits)?

It does seem AMD hasn't actually market-segmented this feature out to the level of near-pointlessness like Intel has, but I can't seem to find good information on which chips do and don't support it. If the E-350 does, though, that is a good sign.

Re:BY THE POWER OF GRAYSTOKE !! (0)

Anonymous Coward | about 9 months ago | (#44370039)

EVERY Skeletor/AMD chip that can do x64 can run VMware bare-metal.
As you know, bare metal needs the chip extension thingy called "hardware virtualization" (not the I/O one).
I dunno why this is not common knowledge.

Re:BY THE POWER OF GRAYSTOKE !! (1)

unixisc (2429386) | about 8 months ago | (#44373385)

I thought that IBM has the POWER. You know - Performance Optimization With Enhanced RISC. Or the platform formerly known as RS/6000.

Software Defined Services (-1)

Anonymous Coward | about 9 months ago | (#44366037)

SDS is just marketing bullshit. Stop posting this crap here.

Huh? (0)

Anonymous Coward | about 9 months ago | (#44366085)

How can it be software-defined if it's baked into the CPU?

Re:Huh? (0)

Anonymous Coward | about 9 months ago | (#44366629)

It's baked into Intel CPUs, which is Intel's way of creating a walled garden that takes control away from actual working hardware that's non-Intel... by calling it "software" it sounds like innovation and stuff.

Re:Huh? (0)

Anonymous Coward | about 9 months ago | (#44369413)

But a million blocks or so of an FPGA on each die.

Avoton lookup avorton (0)

Anonymous Coward | about 9 months ago | (#44366117)

avorton m (plural avortons): abortion (fruit/produce that doesn't come to maturity)

"Programmable" (1)

oldhack (1037484) | about 9 months ago | (#44366119)

Who are the morons spreading "software-defined" bullshit when there already is a common, well-understood word that perfectly describes the feature?

Re:"Programmable" (1)

vux984 (928602) | about 9 months ago | (#44366205)

Who are the morons spreading "software-defined" bullshit when there already is a common, well-understood word that perfectly describes the feature?

Well, I can have a "software phone," which is a phone-as-app that runs on my desktop. Or I can have a "programmable phone," which describes pretty much any non-trivial office phone one can buy.

I think there is a difference there, don't you?

Re:"Programmable" (1)

oldhack (1037484) | about 9 months ago | (#44366269)

That's a crap analogy, man. Try this: "programmable network" vs. "software-defined network". Which describes it better, not to mention rolls off the tongue more easily?

Re:"Programmable" (0)

Anonymous Coward | about 9 months ago | (#44366257)

I assure you it is the idiots in marketing.

The Avoton chip is a SoC-ish core with server features bolted on (like failover, fast I/O, ECC memory, etc). It is a chip you can easily build a single-CPU, low-power server around, or populate many of onto a board for a very compact, high-performance, low-power server.

You can build whatever the hell you want with it. The engineers developed a sane product aimed at letting you squeeze more out of a data center rack with less money and less power. In a stupider company, they would have killed the program for cannibalizing high-margin Xeons while waiting to get buried by the competition.

The marketing people then try to think up reasons why people might buy them and put them in their products and so come up with SDN nonsense. We just shake our heads and get back to designing the next thing.

Re:"Programmable" (0)

Anonymous Coward | about 9 months ago | (#44366445)

Perhaps they're saying it's possible for software to "program" the logic circuits, similar to how an EEPROM works.

It's been 15 years since I've used one of these devices (in class), so you'll have to forgive me that I've forgotten the proper IEEE-approved name.

Re: "Programmable" (0)

Anonymous Coward | about 9 months ago | (#44366665)

The same marketeers that are using "cloud" when we already have "Internet Servers". It's called: redefining language so we can keep our profit growth above X00%.

I'm waiting until (0)

Anonymous Coward | about 9 months ago | (#44366133)

We loop back to Service As A Service.

Now that's the real deal!

Whoever figures that one out is gonna make a bundle.

Re:I'm waiting until (0)

Anonymous Coward | about 9 months ago | (#44367531)

I build my Software As A Service using your Cloud Service As A Service that's running on a Software Defined Network. The beauty is, there is no hardware involved.

Incredibly BAD approach to Networking (2)

hackus (159037) | about 9 months ago | (#44366279)

Seriously, are these people at Intel stupid, or have they learned nothing in the 20-some years the Net has been around about why it is incredibly bad to centralize that sort of control and power over infrastructure in a single entity?

Let alone on a computer network?

There is no possible use for something like this unless you want to centralize power and control over the entire network mesh from one location.

Bad for business, bad for citizens in every country that deploys it, just a plain bad idea. I might be inclined to give it a serious thought if I were Alice in Wonderland and could rely on the Cheshire Cat to keep things on the up and up.

But you have to be kidding me? We have governments that would destroy the entire internet with this crap as it is right now if they had that sort of control.

Seriously, don't buy that crap and do not contribute to such an open source project.

-Hack

Re:Incredibly BAD approach to Networking (0)

Anonymous Coward | about 9 months ago | (#44366383)

Intel is just responding to the market. Just like "the cloud" is simply about software which manages clusters of VMs, "software defined networking" is just software which manages clusters of firewalls (where "firewall" refers to things like Linux iptables, OpenBSD PF, and the emerging proprietary packet shuffling stacks that companies are selling as appliances using Intel's high-performance NIC zero-copy development kits.) I know the term "firewall" isn't cool anymore, but that's effectively the existing generic term for software-based routing and filtering, thus SDNs are just ways to centralize configuration--including real-time configuration--of clusters of firewalls.
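
As a rough illustration of that "centralized configuration of a cluster of firewalls" idea, here is a hedged Python sketch. The policy format and host names are invented; the generated commands use standard iptables syntax but are only printed, never executed.

<ecode>
# Sketch only: a central policy rendered into iptables commands for each
# firewall node. A real system would push these over ssh or a management API.

POLICY = [
    # (source, destination, protocol, port, action)
    ("10.0.0.0/8", "10.20.0.5/32", "tcp", 443, "ACCEPT"),
    ("0.0.0.0/0",  "10.20.0.0/16", "tcp", 23,  "DROP"),
]

FIREWALL_NODES = ["fw1.example.net", "fw2.example.net"]   # hypothetical hosts

def render_rules(policy):
    """Turn the central policy into per-node iptables commands."""
    cmds = ["iptables -F FORWARD"]                 # start from a clean chain
    for src, dst, proto, port, action in policy:
        cmds.append(
            f"iptables -A FORWARD -s {src} -d {dst} "
            f"-p {proto} --dport {port} -j {action}"
        )
    return cmds

def push(node, cmds):
    # Stand-in for the real-time distribution step; here we just print.
    print(f"# {node}")
    for cmd in cmds:
        print(f"  {cmd}")

if __name__ == "__main__":
    rules = render_rules(POLICY)
    for node in FIREWALL_NODES:
        push(node, rules)
</ecode>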

Re:Incredibly BAD approach to Networking (2)

Tailhook (98486) | about 9 months ago | (#44366659)

Creating a ubiquitous network is the first step to placing a government camera in every home. Should the Internet have been precluded to avoid centralization of power and control?

WRT citizens; SDN can only implement policies established by people. The correct approach (for a start) to dealing with these policies is this [defundthensa.com] and this [sopastrike.com] , not banning the tools.

WRT business; you're going to lose the argument hard on this one. One appeal of SDN is cost savings; cheaper hardware, easier management, less net complexity, better asset utilization, etc. There is no possible way you're going to convince anyone that has a budget and competitors (that includes governments, BTW) that they will be better off with systems that cost more. Forget it. 'Security' arguments won't preclude it either.

Our problems stem from huge institutions. The bigger an entity becomes, the less it must collaborate, so bad things develop and survive for longer periods without discovery and resistance. Nations of hundreds of millions or billions of people are the problem. Global corporations are the problem. It is too easy to collude when a few entities have exclusive and global reach, and all that is needed to undermine liberty or privacy, or to establish some new rent seeker, is a handshake among the masters-of-the-universe. Big systems should require lots of collaboration.

Dismantling big institutions is the solution. Trying to ban inevitabilities like SDN is a waste of time.

Re:Incredibly BAD approach to Networking (1)

Billly Gates (198444) | about 9 months ago | (#44366951)

Of course it is great for intel!

Oh shit, these switches are too slow? I guess I have to upgrade them to Core i7 Extremes etc. in 1.5 years instead of waiting a decade to upgrade. It is why Intel makes very shitty integrated graphics. They want casual users to buy Core i7 Extremes for simple things so they can make more money.

Little do they know there is this thing called competition.

My guess is these will be updated all the time for being slow = more revenue! I hope IT managers with a brain wake up and think before buying something just because it is from Intel, which always has to beat AMD or non-Intel chipsets.

Re:Incredibly BAD approach to Networking (1)

Skapare (16644) | about 9 months ago | (#44367227)

Did anyone say it's good networking? No! This is all about rerouting MONEY to Intel (data going to NSA is a side effect).

Re:Incredibly BAD approach to Networking (1)

tbonefrog (739501) | about 9 months ago | (#44368851)

SDN makes hacking and covering tracks so, so, so, much more potent, quicker, and easier. Now you don't just have the NSA to be afraid of. As with the entire history of the Internet, they will not worry about security until their baby has grown into a giant, and then they will attempt to tack some kind of loincloth on it and declare it secure.

SDN (1, Interesting)

cosm (1072588) | about 9 months ago | (#44366379)

SDN can suck it. As a guy who lives in the trenches: between LAGs, MSTP, VLAN routing, VRRP/HSRP, TRILL, and now big routing protocols showing up in the datacenter (think OSPF/BGP), plus a motley crew of various other L2-and-up protocols, we have enough decentralized means for corralling bits to their regularly scheduled programs. SDN is just big content's wet dream, or network ODMs looking to get in on the 'app' craze.

No QPI and only 16 PCIe v2 lanes? (1)

Joe_Dragon (2206452) | about 9 months ago | (#44366381)

Adding 10GbE and a RAID card / PCIe-based SSD can eat up all of the PCIe lanes fast. Even more so if they try to jam a video chip and TB on there as well.

No QPI kills more than 1 socket. Also not listed is the number of RAM channels.

Re:No QPI and only 16 PCIe v2 lanes? (0)

Anonymous Coward | about 9 months ago | (#44367161)

Not sure why this got up-modded. This is a single-socket system; what would QPI attach to?
Also, the PCIe v2 matches the PCI capabilities of the Fulcrum OpenFlow switches that Intel sells.
That's just for management.

A whole new level of indirection (3, Insightful)

WaffleMonster (969671) | about 9 months ago | (#44366509)

When I think about the management problems we have today, they are almost entirely caused by unaddressed suckage in various layers of the existing stack. Rather than fixing the underlying problem, people insist on adding new layers of complexity to work around them.

It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.

Next we have CAPWAP/OpenFlow: complex specifications intended to normalize configuration of all your network things. A lot of this is caused by IT chasing architectural fallacies such as "network security" and "redundancy". Layers upon layers of IDS, firewalls and god knows what to "secure the network". The very concepts of things like an "internal network" or load balancers used for application redundancy are flawed, stupid and dangerous. What part of "insider threat" do people not understand?

Routers should be stupid devices which punt packets between interfaces. The error is placing complexity where it does not belong and then having to mask the repercussions of that poor choice with SDN, because otherwise it is all just too hard to manage.

What would happen if, for example, rather than an expensive load balancer for a web farm, browsers simply implemented a heuristic to pull multiple IPs out of DNS and use a delay timer to make multiple connection attempts, with short-term memory of failed requests and RTT feedback? You could effectively mask a failure in the server group with little to no noticeable delay until the failed system can be repaired or yanked from DNS.
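
For what it's worth, such a browser-side heuristic might look roughly like the Python sketch below. This is an assumption on my part, not any real browser's code; real-world schemes such as Happy Eyeballs race connection attempts concurrently rather than trying addresses one at a time.

<ecode>
# Sketch: resolve several IPs for one name, try them with a short per-attempt
# timeout, remember failures and RTTs, and prefer the fastest healthy address.

import socket
import time

FAILED = {}        # ip -> time of last failure
RTT = {}           # ip -> last measured connect time in seconds
RETRY_AFTER = 30   # seconds to avoid an address after a failure

def candidate_ips(host, port):
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    ips = [info[4][0] for info in infos]
    # Skip recently failed addresses, then prefer the best known RTT.
    fresh = [ip for ip in ips if time.time() - FAILED.get(ip, 0) > RETRY_AFTER]
    return sorted(fresh or ips, key=lambda ip: RTT.get(ip, float("inf")))

def connect(host, port=80, attempt_timeout=0.25):
    for ip in candidate_ips(host, port):
        start = time.time()
        try:
            sock = socket.create_connection((ip, port), timeout=attempt_timeout)
            RTT[ip] = time.time() - start
            return sock
        except OSError:
            FAILED[ip] = time.time()   # remember the failure and move on
    raise OSError(f"all addresses for {host} failed")

# Example: sock = connect("example.com")  # falls over to the next IP on failure
</ecode>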

The most detrimental error I see repeated constantly is this notion that the data tier, the network or the operating system is somehow responsible for the lack of scalability or availability of an application. This is fundamentally bullshit. Systems must be DESIGNED to scale. Smoke and magic from a vendor only delays or masks underlying problems. We need smarter software, not software-defined gimmicks.

Re:A whole new level of indirection (0)

Anonymous Coward | about 9 months ago | (#44366731)

I agree that software defined networking is a fallacy. The largest flaw in that sort of design is the full lack of control and understanding of the underlying architecture.

However, I must disagree with you about handling the solution to network and scalability problems within the software stack of the requesting agent. The concept of firewalls, routers, IDS, etc. is sound if designed in a proper, scalable context. The issue currently being faced is that people are under, for lack of a better term, a "spell" around the term "cloud", and "SDN" fits real well into that "cloudy" thinking.

Everyone wants to offload the complexities to a pseudo-network of smart nodes that make decisions magically, versus an actual understanding of the problem. This is shown with Amazon, which is basically SDN, and we know how well that works when actual network latency and IOPS matter. The irony is that all that ends up happening is adding another layer to emulate something that existing networks already do.

To take the web site request example you presented: load balancers are required to handle immediate failures of the backend infrastructure and to create a farm of nodes to process those requests. That load balancer layer is then offset by a caching farm layer to return requests as quickly as possible and eliminate load from the actual web server layer, which in turn speaks to a load balancer to hit the DB and other specialized layers. All of those aspects are handled in a properly designed architecture using a pretty basic configuration of VIPs and health checks to ensure the environment works, scales, and, more importantly, handles failure.
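
A toy version of the VIP-plus-health-check pattern described above, just to show how little machinery the basic idea needs (hostnames and ports here are made up):

<ecode>
# Sketch: backends behind one "virtual" address, a periodic TCP health check,
# and requests handed only to nodes that passed their last check.

import random
import socket
import time

BACKENDS = ["web1.internal:8080", "web2.internal:8080", "web3.internal:8080"]
healthy = set()

def health_check():
    """Mark a backend healthy only if its TCP port answers quickly."""
    healthy.clear()
    for backend in BACKENDS:
        host, port = backend.rsplit(":", 1)
        try:
            with socket.create_connection((host, int(port)), timeout=1):
                healthy.add(backend)
        except OSError:
            pass   # failed check: the node simply stops receiving traffic

def pick_backend():
    """What the VIP does per request: choose among currently healthy nodes."""
    if not healthy:
        raise RuntimeError("no healthy backends behind the VIP")
    return random.choice(sorted(healthy))

if __name__ == "__main__":
    for _ in range(3):                 # real balancers check continuously
        health_check()
        print("healthy:", sorted(healthy))
        time.sleep(5)
</ecode>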

The goal in any sort of really scalable design is harmony between the development group and the infrastructure group. The development group shouldn't need to know about the infrastructure layers and redundancy; they need to know that if they point to the IP or FQDN, it magically works and the DB passes back the data. The infrastructure group in turn should be working towards simplifying the developers' access so they can do their jobs.

Adding this other layer of SDN is just another layer of complexity and basically another step toward not understanding how anything works. I think you would agree with me; it's another one of those things that's actually meant to remove capability and customization from the user and hand it to the vendor to shape what you're able to do in your own network.

Re:A whole new level of indirection (0)

Anonymous Coward | about 9 months ago | (#44366769)

You should read the OpenStack documentation. OpenStack *replaces* spanning tree, VLANs and routing with a flow-based architecture. If anything, it's easier to think about a one-layer network than spanning tree plus broadcast domains mashed up with VLANs and routing.

Re:A whole new level of indirection (0)

Anonymous Coward | about 9 months ago | (#44369145)

Yay, reinventing infiniband badly.

Re:A whole new level of indirection (1)

AK Marc (707885) | about 9 months ago | (#44366771)

It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.

I've not seen that. I saw clustering for redundancy getting a foot-hold in the late 1990s. Then people would cluster for reliability, and have more processing than they needed, but rather than adding services, which could impact reliability, why not have multiple computers on multiple machines, with more redundancy and almost the same power as a 1:1. The bonus being you could make sub-computers (DNS servers dedicated to DNS/DHCP with OS minimums of resources for their tiny load, remember I'm talking corporate, not ISP). So then it became known as ways to make super-servers. Get a chassis system, and you'd link 8 servers, 4 CPUs each, for 128+core performance, and a directly attached 16 TB array. Rarely these days do I see a shitload of images on a single computer unless it's for an under-100 person company with a one-person IT department who likes VMs or wants to build a resume.

In fact, SDN is an "old" concept that was deployed in VMs first, where a pile of computers would hang off a single port.

A piece of the pie (1)

Rob_Bryerton (606093) | about 9 months ago | (#44367409)

So, here are some random, somewhat connected ideas. This is a long-winded post, but please bear with me. First, take a look at the buzzwords, and you can tell where the money will be flowing. Several years ago, the big thing was "Green", right? Then came "Big Data", and the last 2 or 3 years have been all "Cloud". Now if you've been paying attention, this year's buzzword is "Software Defined $TECHNOLOGY", which of course was kicked off with "Software Defined Networking" (SDN).

This is my notion of what we'll be seeing tons and tons of this year and for the next couple of years until the Next Big Thing hits, and if you move fast, you've got a great chance to get in on the ground floor, so to speak, and sell a shit-ton of product to big, dumb enterprise. Here's the premise: Think of a general area of technology, and try to apply the VMware approach to it; that is, decouple, generalize, pool resources, and rename your tech "Software Defined $TECHNOLOGY". Gold mine.

So, among other players and products, VMware took x86 virtualization from an obscure tech to an everyday item that most of us use on a daily basis, whether you're an SA, developer, whatever. If you work in a medium to large shop, you're using VMware. Great (but vastly overrated) product, great timing; these guys really shook things up. So take that same method, that shim (hypervisor) that slipped between the hardware and the OS allowing us to do all kinds of cool stuff: consolidation, live workload migrations, live storage migrations. Take that methodology, that concept, and apply it to different parts of the stack.

Storage: this has been going on in the mid-range and enterprise storage space for several years already. In the past we'd have an array w/hundreds of drives which we'd group into RAID groups, the type and sizes dictated by workload (space, IOPS requirements, etc), then from our RAID groups we'd carve the LUNs and present them to the servers. The only issues with this were those of flexibility after the LUNs were presented: what do you know, you need more space, IO, whatever, and then you need to do a LUN migration to something bigger/faster. Or, the more common case, the people asking for disk don't know what they really need, so they way overshoot the space/IO requirements and you deliver way more than they end up using, and you've just wasted tons of premium disk.

So the 'fix' the storage vendors came up with is the first small step into virtualizing storage: instead of rigid RAID groups and having to have a somewhat knowledgeable storage guy on staff, the storage vendors came up with storage pools. You take a hundred or a few hundred spindles and toss 'em into the pool, and carve LUNs from there. It's like this monstrous RAID group that can absorb 20,000 IOPS (or some crazy number depending on how you slice and dice your array). Instead of RAID groups dedicated to each application, you now have all of your low-end and mid-range applications sharing the same storage pool. Sort of like how you can run 100 OS instances on a small VMware setup. Instead of buying 100 servers, you buy 4 or 5 and dump all of those Windows boxes that average less than 1% utilization into the VMware cluster, and save a mint on hardware and data center rack space, right? Same idea w/the storage pools. The only problem is you're still dealing with the same vendors and buying tons of overpriced spindles; not as many, but still a lot, and you're paying dearly for it.
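
To put the RAID-group-versus-pool difference in code form, here is a back-of-the-envelope Python sketch; the capacity and IOPS numbers are invented for illustration, not taken from any vendor:

<ecode>
# Sketch: all spindles go into one shared pool, and LUNs are carved from the
# pool's combined capacity and IOPS budget instead of from fixed RAID groups.

class StoragePool:
    def __init__(self, spindles, gb_per_spindle=900, iops_per_spindle=150):
        self.capacity_gb = spindles * gb_per_spindle
        self.iops_budget = spindles * iops_per_spindle
        self.luns = {}

    def carve_lun(self, name, size_gb, expected_iops):
        """Allocate a LUN only if the pool can still cover both budgets."""
        if size_gb > self.capacity_gb or expected_iops > self.iops_budget:
            raise ValueError(f"pool cannot satisfy {name}")
        self.capacity_gb -= size_gb
        self.iops_budget -= expected_iops
        self.luns[name] = (size_gb, expected_iops)

pool = StoragePool(spindles=200)            # one big shared "RAID group"
pool.carve_lun("exchange_db", size_gb=4000, expected_iops=6000)
pool.carve_lun("file_share", size_gb=10000, expected_iops=2000)
print(f"left in pool: {pool.capacity_gb} GB, {pool.iops_budget} IOPS")
</ecode>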

So the next step is to get away from the arrays that cost $100k's/$megabucks, mix in some scale out mojo (GlusterFS, whatever), add a dash of pooling, use off the shelf servers stuffed w/cheap disk perhaps fronted with some SSD/flash, and voila: "Software Defined Storage". You heard it here first! Or maybe not, you probably thought of this as well. (I was daydreaming this stuff a while ago, and the next day went to a local VMware conference, and no shit, an EMC guy had this same idea and terminology on one of his PowerPoint slides.)

OK, so that's storage. What else can we do? Well, networking is ripe for a shake-up and virtualization job, but virtualization is a boring name, so let's call it "Software Defined Networking". I'll leave this one as an exercise for the reader, but one thing is certain: Cisco has got to go. While their high-high end hardware and protocols might be untouchable, their mid-range and low-end stuff is ripe for the picking, and they will *not* be the ones who ultimately benefit from SDN, because their software just plain sucks. And they're too big, fat, slow and expensive to move on this...

TL;DR: Virtualization led they way but the name is stale. Take the same concept, apply to storage and networks, and we have SDS & SDN: Software Defined Storage and Software Defined Networking. These are the buzzwords for 2013-2014.

Re:A piece of the pie (0)

Anonymous Coward | about 9 months ago | (#44371133)

I thought the term was taken from Software Defined Radio, where you use SDR to do all sorts of interesting things with radio.

GPS? yes
TV? yes
LTE? yes
RADAR? Yes with clever antenna design
Radio Telescope? why not.
Ham? duh.

the whole low end of the EM spectrum is your playground.

I can't help myself, sorry (0)

Anonymous Coward | about 9 months ago | (#44368727)

As a Finn, the name is just begging to be twisted into a pun here.

Just the two most obvious:

s/Av/Aiv/g -> brainless
s/Av/Arv/g -> worthless

and I'm sure these aren't the worst someone will come up with.

Heh Heh.. Heh heh (0)

Anonymous Coward | about 9 months ago | (#44369303)

Did he say .. "Voltron"

Where have I seen this before..... (0)

Anonymous Coward | about 8 months ago | (#44374099)

Oh, that's right.....the old phone system that we have largely replaced with DISTRIBUTED routing & switching infrastructure. Funny how old ideas come around as "new" ideas.

Reading all the posts here what gives? (1)

deviated_prevert (1146403) | about 9 months ago | (#44454819)

From what I understand the whole purpose of what Intel [hp.com] is doing is along the same lines as the HP Moonshot hardware design. READ CAREFULLY WHAT INTEL IS DOING WITH HP AND WHY

How the hell did the discussion suddenly get sidetracked into blaming Intel and the hardware manufacturers for creating software security issues? Lately any post about hardware that is not 100 percent Microsoft-friendly seems to get slagged by idiots.

The highest-rated posts are essentially rants; not a whisper about why going along with Intel's, and essentially HP's, modular hardware and software initiative has the potential to reduce costs and make securing and maintaining complex, diverse networks easier.

SoCs in servers with a flexible software setup are what is coming, guys; the days of putting add-on cards and software drivers into servers are numbered. WHAT YOU do with the system and how you run it is the security system, not THE OS or the hardware. Essentially Intel and many others are telling Microsoft to take a flying f++k with the security-on-chip garbage. AND IT IS ABOUT FRIGGIN' TIME.

Within ten years it will be standard for plug-in SoC cards to populate servers. The bandwidth problems are no longer an issue.

Intel and HP have it right this time, moving away from the old mindset that you can only configure a server by selling proprietary, closed-source software drivers and closed operating systems that you pay extra for depending on node scale. Linux is poised to kick Microsoft's royal ass this time around, and it is largely due to what Microsoft has done to the industry by insisting on a piece of the per-processor pie, not to mention bullying the shit out of everybody in the consumer market with their hare-brained locked boot loaders and data execution locks.

This is why Intel and HP are partnering with everybody but Microsoft on these chips and systems. Having to run closed MICROSOFT code and closed driver binaries has been the security problem with 64-bit servers, and anyone with a brain realizes this.
