
Getting Students To Think At Internet Scale

kdawson posted more than 4 years ago | from the peta-here-a-peta-there dept.

Education

Hugh Pickens writes "The NY Times reports that researchers and workers in fields as diverse as biotechnology, astronomy, and computer science will soon find themselves overwhelmed with information — so the next generation of computer scientists will have to learn think in terms of Internet scale of petabytes of data. For the most part, university students have used rather modest computing systems to support their studies, but these machines fail to churn through enough data to really challenge and train young minds to ponder the mega-scale problems of tomorrow. 'If they imprint on these small systems, that becomes their frame of reference and what they're always thinking about,' said Jim Spohrer, a director at IBM's Almaden Research Center. This year, the National Science Foundation funded 14 universities that want to teach their students how to grapple with big data questions. Students are beginning to work with data sets like the Large Synoptic Survey Telescope, the largest public data set in the world. The telescope takes detailed images of large chunks of the sky and produces about 30 terabytes of data each night. 'Science these days has basically turned into a data-management problem,' says Jimmy Lin, an associate professor at the University of Maryland."

98 comments

Data management problem (5, Insightful)

razvan784 (1389375) | more than 4 years ago | (#29729651)

Science has always been about extracting knowledge from thoughtfully-generated and -processed data. Managing enormous datasets is not science per se, it's computer engineering. It's useless to say 'hey I'm processing 30 TB' if you're processing them wrong. Scientific method and principles are what count, and they don't change.

Re:Data management problem (0, Interesting)

Anonymous Coward | more than 4 years ago | (#29729715)

The article doesn't convince me there's any need to "think at internet scale." Whether processing 100MB or 100 petabytes, the process would be the same.

Re:Data management problem (2, Insightful)

Trepidity (597) | more than 4 years ago | (#29729821)

I agree, and don't think it's anywhere near the science/CS-education bottleneck either. It's true that it can be useful to work with some non-trivial data even in relatively early education: sifting through a few thousand records for patterns, testing hypotheses on them, etc., can lead to a way of thinking about problems that is hard to get if you're working only with toy examples of 5 data points or something. But I think there's very little of core science education that needs to be done at "internet-scale". If we had a generation of students who solidly grasped the foundations of the scientific method, of computing, of statistics, of data-processing, etc., but their only flaw was that they were used to processing data on the order of a few megabytes and needed to learn how to scale up bigger--- well, that'd be a good problem for us to have.

Apart from very specific knowledge, like actually studying scaling properties of algorithms to very-large data sets, I don't see much core science education even benefiting from huge data sets. If your focus in a class isn't on scalability of algorithms, but on something else, is there any reason to make students deal with an unwieldy 30 TB of data? Even "real" scientists often do their exploratory work on a subset of the full data set.

Re:Data management problem (2, Interesting)

Enter the Shoggoth (1362079) | more than 4 years ago | (#29730339)

I agree, and don't think it's anywhere near the science/CS-education bottleneck either. It's true that it can be useful to work with some non-trivial data even in relatively early education: sifting through a few thousand records for patterns, testing hypotheses on them, etc., can lead to a way of thinking about problems that is hard to get if you're working only with toy examples of 5 data points or something. But I think there's very little of core science education that needs to be done at "internet-scale". If we had a generation of students who solidly grasped the foundations of the scientific method, of computing, of statistics, of data-processing, etc., but their only flaw was that they were used to processing data on the order of a few megabytes and needed to learn how to scale up bigger--- well, that'd be a good problem for us to have.

Apart from very specific knowledge, like actually studying scaling properties of algorithms to very-large data sets, I don't see much core science education even benefiting from huge data sets. If your focus in a class isn't on scalability of algorithms, but on something else, is there any reason to make students deal with an unwieldy 30 TB of data? Even "real" scientists often do their exploratory work on a subset of the full data set.

I disagree with your agreement :-)

I suspect that what the article is getting at is that when you deal with very large sets of data you have to think about different algorithmic approaches rather than the cookie-cutter style of "problem solving" that most software engineering courses focus on.

These kinds of problems require a very good understanding of not just the engineering side of things but also a comprehensive idea of statistical, numerical and analytical methods as well as an encyclopaedic knowledge of computability, complexity and information theory.

Just think about how different the Lucene [wikipedia.org] library or MapReduce [wikipedia.org] are from the way most developers would have approached the problems that these tools address.

Re:Data management problem (1)

StellarFury (1058280) | more than 4 years ago | (#29730965)

Clarification: the headline says "students" in science need to learn to think in internet-scale terms. This is clearly, clearly false, and bordering on stupid. There's no reason a chemist, biologist, or physicist needs internet-scale data sets if the systems they study are simply not that large.

The summary says computer scientists, which I only partially buy. Again, you have to be working in a field that uses those data sets. If you aren't, then what does all your upscaling knowledge do for you? Diddly. Basically, if you're a star person, a protein person, or a particle person, then you need this. If you aren't, you don't.

Re:Data management problem (2, Insightful)

Hognoxious (631665) | more than 4 years ago | (#29733483)

Surely a chemist should know about chemistry, a biologist about biology and so on.

If either needs to do computation beyond his own capabilities, he needs to get a CS person to help him. That's what specialists do, they specialise.

Re:Data management problem (1)

lewiscr (3314) | more than 4 years ago | (#29734219)

There's no reason a chemist, biologist, or physicist needs internet-scale data sets if the systems they study are simply not that large.

Yes, if the systems are not that large. But I think all three of your examples are poorly chosen. You picked the three groups of scientists that are (as a field) producing huge data sets. Have you seen the amount of data generated in a single run of a small particle accelerator? Plasma containment simulations? Chemistry simulations (esp. where it pertains to biology)? I get this list from reading slashdot. I'm sure there are a lot more fields that I'm unaware of.

Yes, not every physicist is working on a particle accelerator. But enough of them are that some of them will benefit from this training.

Re:Data management problem (1)

rikkards (98006) | more than 4 years ago | (#29729829)

No no, it's "learn think" not "think".

WTF? This is the second day in a row where I have seen a similar typo like this in the summary.

Re:Data management problem (0)

Anonymous Coward | more than 4 years ago | (#29735631)

irrelevant semantic difference, not a typo.

Re:Data management problem (4, Informative)

adamchou (993073) | more than 4 years ago | (#29730109)

that's absolutely not true. the process is vastly different when it comes to working with 100 MB or 10 petabytes. let's take databases, for instance. if you have 100MB of data, you can just store the entire database on one server. when it comes to 100 PB of data, it's even difficult to find hardware capable of storing that much data. you need to start looking at distributed systems, and distributed systems is such a broad field in itself.

when i graduated in 2005, a lot of the techniques i was taught worked great for working with database systems that handled a few hundred thousand rows. then i got a job at an internet company that had tables with over 80 million rows. all that normalization stuff i learned in school had to be thrown out. times may have changed now, but when i was in school, not only did i not learn how to handle "internet scale" data sets, i was taught the wrong methods to handle large data sets.

undergrad college students should at least get a basic intro to large data sets, if not have a class completely dedicated to learning how to work with those data sets. school is supposed to prepare you for the work force. at least give the students the option to take a class that covers those topics if they want to go into those industries. i sure wish i had that option

Re:Data management problem (1)

MartinSchou (1360093) | more than 4 years ago | (#29730299)

then i got a job at an internet company that had tables with over 80 million rows. all that normalization stuff i learned in school had to be thrown out.

Why is normalization useless just because you have 80,000,000 rows? I'm genuinely curious.

Re:Data management problem (2, Informative)

autocracy (192714) | more than 4 years ago | (#29730855)

One example: I deal with healthcare claims. We keep everything normalized on insertion, but we also create some redundant, denormalized tables (data warehousing). Almost every query needs the same basic claim information, but I'm doing it in a query with one or two joins instead of 10.

If something goes south with my manipulated tables, or I need a strange field, I still have my source data in a pure form. For a standard query, though, I can operate an order of magnitude faster by adding redundant tables that only have to be written once on insert by a trigger.
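
For illustration, here's roughly what that trigger pattern looks like. This is a minimal sketch using SQLite with an invented, heavily simplified schema, not the poster's actual system:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Normalized source tables (hypothetical names, heavily simplified).
    CREATE TABLE patients (patient_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE claims   (claim_id INTEGER PRIMARY KEY,
                           patient_id INTEGER REFERENCES patients,
                           amount REAL);

    -- Redundant, denormalized reporting table, written once on insert.
    CREATE TABLE claim_report (claim_id INTEGER, patient_name TEXT, amount REAL);

    -- The trigger keeps the flat copy in sync so routine queries need no joins.
    CREATE TRIGGER claims_ai AFTER INSERT ON claims
    BEGIN
      INSERT INTO claim_report VALUES (
        NEW.claim_id,
        (SELECT name FROM patients WHERE patient_id = NEW.patient_id),
        NEW.amount);
    END;
    """)

    conn.execute("INSERT INTO patients VALUES (1, 'Doe, J.')")
    conn.execute("INSERT INTO claims VALUES (100, 1, 59.95)")

    # The standard query reads one flat table instead of joining ten.
    print(conn.execute("SELECT * FROM claim_report").fetchall())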

Re:Data management problem (2, Insightful)

Anonymous Coward | more than 4 years ago | (#29731219)

I don't think you're going against the spirit of normalized tables. You've added a persistent cache which happens to be implemented in the database, that's all. Most high-end databases support what you're doing via materialized views (or materialized query tables, or summary tables, or whatever; the name varies). The RDBMS basically just writes the triggers for you, but provides the added benefit of using the MQTs for optimization somewhat like an index. Properly done, you can write your queries against the (normalized) base tables, and the query planner will use the MQT instead if it can.

Really, the reason to push normalized tables is the whole "Code first; optimize later, if at all" thing. Put all your source data in the database because you never know exactly how much of it you need or can benefit from using. Normalize the tables because you never know exactly how you will be using them. Only when your code is quite stable will you know what queries are too slow or complex, and then you can optimize them by creating summary tables. Optimizing too soon will result in a lot of wasted effort and make your job harder down the road.

Re:Data management problem (1)

DragonWriter (970822) | more than 4 years ago | (#29732811)

One example: I deal with healthcare claims. We keep everything normalized on insertion, but we also create some redundant, denormalized tables (data warehousing). Almost every query needs the same basic claim information, but I'm doing it in a query with one or two joins instead of 10.

That sounds like just using a form of materialized views, which, while the implementation (in an RDBMS that doesn't implement them internally) may involve using triggers and denormalized "base" (in terms of the RDBMS, not the usage) tables, isn't really outside of basic database usage or anything novel. I'm somewhat surprised that, if you had more than incidental formal coverage of RDBMSs, materialized views weren't covered, and so would be something "outside" of what you learned about databases in school.

Re:Data management problem (1)

autocracy (192714) | more than 4 years ago | (#29743315)

I wasn't aiming to point out something outside. It doesn't have to be novel, or advanced. While Oracle implements materialized views, using MySQL I have to do it myself.

Denormalization has never been novel. The point was just to give the parent of my comment an example of denormalization, and why one might do it.

Re:Data management problem (1)

DragonWriter (970822) | more than 4 years ago | (#29746051)

I wasn't aiming to point out something outside. It doesn't have to be novel, or advanced.

The post you were responding to was itself responding to someone else saying that large datasets required throwing out "all that normalization stuff" they learned in school.

Using materialized views (whether canned or roll-your-own) in the way you describe (that is, with a normalized base schema to prevent the anomalies normalization exists to prevent, and appropriate tables and triggers set up to present efficient-to-access materialized views on that base schema) isn't an example of that, since normalization is still used, and still used for the purpose it exists to serve.

Re:Data management problem (1)

adamchou (993073) | more than 4 years ago | (#29734525)

the most common reason i run into is that sometimes a table gets so big, or receives so much traffic, that a dedicated server is needed for it (i'm aware of the sharding option but that's much more difficult to implement and has limitations). so you split that table off from the rest of the tables. one table we had an issue with was our users table. now normally, you would split off some user data into other tables and do a join between the main user table and some of the secondary data on the other tables. but since these tables reside on different physical servers, foreign key constraints and joins are impossible. pulling from two different servers is going to drastically slow down the query. so what you need to do is data duplication: either copy the table (where you may then run into issues with real-time replication) or copy the rows directly into the users table. more info... http://en.wikipedia.org/wiki/Denormalization [wikipedia.org]
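
As a toy illustration of the sharding option mentioned above (the hostnames and hashing scheme are invented; real deployments also have to deal with resharding and replication):

    import hashlib

    # Hypothetical shard servers; each user's rows live entirely on one of them.
    SHARDS = ["db01.example.com", "db02.example.com", "db03.example.com"]

    def shard_for(user_id):
        # Hash the key so users spread evenly. Cross-shard joins are impossible,
        # so any data a query needs must be duplicated onto the user's shard.
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for(42), shard_for(43))  # different users may land on different servers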

Re:Data management problem (1)

Alpha830RulZ (939527) | more than 4 years ago | (#29734955)

It's not accurate to say that normalization is useless - it's more accurate to say that normalization has costs that can become large, especially when you are sequentially processing an entire dataset.

For example, consider the following employee table:

Name|position|salary
Andrew|programmer|40000
Joe|tester|43000
Jane|programmer|60000 -- fucking reverse discrimination!

If you were to normalize this, you'd factor out the position column into a positions table:

position_key|position
1|programmer
2|tester

and change the employee table to:

Name|position_key|salary
Andrew|1|40000
Joe|2|43000
Jane|1|60000

And you'd get the records by using 'select name, position, salary from employee join positions on employee.position_key = positions.position_key where [some condition is true]' (or 'where 1=1' to pull everything).

This is fine and good in an application database, and it has benefits for maintainability, etc. However, it creates an assembly job for the database on every query. If you were doing this for a company of 400 employees, no big deal. If you were doing this for a census tabulation for the US, it could add appreciably to cost and time for the job. If you have large datasets that you are processing sequentially, that assembly job can become untenably expensive, regardless of how good your DB engine is or how fast your hardware is. You eliminate that cost by accepting the denormalized form, which increases your disk usage, but may reduce your processing time. The DB optimizers try to handle this, but it still results in instruction cycles being applied.

So, like most things in data processing and the real world, it depends. This is a good example of the difference between computer science and software engineering.

Re:Data management problem (2, Insightful)

WarpedMind (151632) | more than 4 years ago | (#29731203)

I'm afraid you are limited by a short time horizon. I remember working and computing on systems where 100MB was just as difficult and expensive to deal with as 100PB is today. 2MB was the amount of mountable storage on small systems. Anything larger and you had to go to "big iron".

Real work was done on those small systems, and good scientific principles and methods were the key then and are the key now.

Just remember that the "laptop" 10 years from now will have over 8TB local SSD.

I operate an archive for the university. 10 years ago when we started it, a 10MB file was considered a pretty big file. Today it is the smallest size file we like to see stored in the archive. We store several PB and I consider ours a small archive. 100PB in a few years will be nothing. But those 100-exabyte files... now those will be difficult to work with. It will be "difficult to find hardware capable of storing that much data."

Re:Data management problem (1)

adamchou (993073) | more than 4 years ago | (#29734427)

when i said finding the hardware to store 100PB is difficult, i didn't mean finding enough hard drives that we could hook up to a network. that's easy to do. but if you have a 100PB file, what do you do? that's where distributed file systems like hadoop come in. then you have the issue of trying to process that much data. running a processing thread on one node will take forever to get done.
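
To sketch the difference (a toy example; the chunk paths and the per-record test are invented, and a real HDFS job would hand workers block locations rather than filenames):

    from multiprocessing import Pool

    def count_matches(chunk_path):
        # Each worker scans one chunk of a dataset far too large for any single node.
        hits = 0
        with open(chunk_path, "rb") as f:
            for record in f:
                hits += b"GATTACA" in record  # stand-in for real per-record work
        return hits

    if __name__ == "__main__":
        chunks = ["/data/part-%05d" % i for i in range(8)]  # hypothetical chunk files
        with Pool() as pool:
            # One thread would do these sequentially; N workers cut wall time ~N-fold.
            print(sum(pool.map(count_matches, chunks)))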

Re:Data management problem (1)

ResidentSourcerer (1011469) | more than 4 years ago | (#29743727)

If this is really a concern for teaching, then perhaps the problem should be scaled down virtually:

Suppose that you implement virtual i386s with 4 MB of RAM and a 40 MB hard drive. Write a special driver to add an 80 ms delay to each disk access. Now instantiate 1000 copies of this VM on a machine or small cluster. This effectively turns a 3 GHz machine into 1000 3 MHz machines (actually slower, because of VM host overhead).

Each machine above is 1/1000 of a real machine. Now instead of needing a 10 PB data source you can get by with 10 TB. Class sets of 10 TB arrays are not unreasonable.

The data set is large compared to the resources of an individual VM. It's still large compared to the entire array of machines. (Data set is 2500 times the size of the entire VM array)

While learning this, students would start with a single VM and a smaller data set, and as the class progresses, they would get 2, 4, 8, 16... VMs and apply various algorithms for inter-machine communication.

Re:Data management problem (1)

2obvious4u (871996) | more than 4 years ago | (#29731445)

Well if Kryder's law holds then in 2042 we should have a 100PB iPod. Storing the data really isn't that hard anymore.

Re:Data management problem (1)

lewiscr (3314) | more than 4 years ago | (#29734357)

school is supposed to prepare you for the work force

University (what TFS mentions) is not supposed to prepare you for the workforce. University is supposed to teach you to think. Once you know how to think, it's your job to figure out how to work. If you went to a university to prepare you for the workforce, you got swindled. Go to a technical or vocational school, like DeVry or ITT Tech.

That said, this genre of algorithms should be included in an algorithms class. At least introduce the concept, so that people that have learned how to think don't have to re-invent another wheel.

Re:Data management problem (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29730279)

Whether processing 100MB or 100 petabytes, the process would be the same.

I disagree. From my perspective, as a research student in astronomy, I can set my desktop to search through 100MB of catalogued images looking for objects that meet a certain set of criteria, and expect it to finish overnight. If I find that I've made an error in setting the criteria, that's not such a big deal - I fix my algorithm, and get the results tomorrow.

With a 100PB archive, like the next generation of telescopes is likely to produce, I can't do that. I need more computing power. (There's a cluster at my university which we astronomers can borrow for tasks like this - fortunately, most such problems are trivially parallelisable.) I need to make sure I get my criteria right the first time, or people will be annoyed that I've wasted a week's worth of supercomputer time. And if I can do anything to make my search more efficient, even if it takes me a few days, it's worth doing.

It's issues like this that make me wish I'd studied a bit of computer science in my undergraduate years - which is, incidentally, exactly what TFA is talking about.

Re:Data management problem (1)

mrrudge (1120279) | more than 4 years ago | (#29730315)

Do the search on a known subset of data ( including samples of everything you need to detect ) on your local machine overnight, iterate, perfect, throw it at the large set of data ?

I did four and a half years of degree level computer science. Fun times ...

Re:Data management problem (0)

Anonymous Coward | more than 4 years ago | (#29730557)

Yes, that's one of the things I'd do with a larger data set.

There's a sliding scale between automating nothing (and doing it all by hand), and writing an AI that will analyse the data and write my thesis for me. :) Having larger data sets shifts the optimum towards the latter end of the scale.

Re:Data management problem (3, Interesting)

Hal_Porter (817932) | more than 4 years ago | (#29730373)

That's not true. The way you solve the problem changes radically depending on the amount of data you have. Consider

100 KB - You could use the dumbest algorithm imaginable and the slowest processor and everything is fine.

100 MB - most embedded systems can happily manage it. A desktop system can easily, even in a rather inefficient language. Algorithms are important.

100 GB - Big ass server - you'd definitely want to make sure you were using an efficient language and had an algorithm that scaled well, certainly to 2 processors and most likely to 4 processors. Probably should be 64 bit for efficiency.

100 PB+ You'd want a Google like system with lots of nodes. Actually I think at this point the code would look nothing like the 10 MB case. I remember someone saying that Google is "just a hash table". Now I think that misses the point. Google has invented things like Map/Reduce and has custom file systems. They've also spent a lot of time trying to cut costs by studying the effects of temperature on failure rates.

Now I think these guys are spouting buzzwords. But if you want to process 100PB of data on

Re:Data management problem (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29732317)

Google has invented things like Map/Reduce

Yikes. Google has been good about applying existing parallel and distributed computing concepts into their engineering, but they didn't invent the CS fundamentals. Map-reduce constructs are a basic idiom of most functional programs and parallel programs (whether functional or not) in scientific computing. What Google may have invented was a way to finally teach such basics to the hipsters who otherwise think the CS literature starts with their own first programming task.

Similarly, their Python guru Guido did not invent a bunch of programming language concepts so much as cherry pick and apply some into his own bastard language. In this regard, he has more in common with Larry Wall creating Perl than with the real programming language theorists who made all the breakthroughs since the early days of the Lambda calculus.

Re:Data management problem (1)

markov_chain (202465) | more than 4 years ago | (#29733615)

Now I think these guys are spouting buzzwords. But if you want to process 100PB of data on

Error: Out of comment memory on line 9. Aborting!

Re:Data management problem (1)

Hal_Porter (817932) | more than 4 years ago | (#29734185)

> Now I think these guys are spouting buzzwords. But if you want to process 100 PB of data on

Yeah, a high profile work interrupt came in at that point and unfortunately scrambled my slashdot post composing process. What I meant to say was

"Now I think these guys are spouting buzzwords. But if you want to process 100 PB of data on a Google like cluster of machines the way you do it is very different from 100 KB or even 100 MB on single processor machine.

Of course small systems have their own challenges - I've written code in C for very crippled embedded systems where even 100 KB is hard to process because you can't rely on keeping more than a few KB in RAM at any one time.

In between of course there's an easy case where you don't need to worry too much about scaling to multiple processors and you can fit everything in memory.

Now university only teaches you the easy case and, worse, it teaches you to sneer at the techniques you need to solve the difficult cases (very big or very small) as hacks. Of course it would be nice if we could all write single threaded code on systems with enough memory to make things easy, but the reality is that hardware like that is too expensive for small embedded systems and not possible to build for the Google case."
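
To make the contrast concrete, here is the map/reduce idiom in miniature - a single-machine Python word-count sketch; Google's version distributes the same two phases (plus the shuffle between them) across thousands of nodes:

    from collections import defaultdict

    def map_phase(document):
        # Each mapper sees only its own shard of input and emits (key, value) pairs.
        for word in document.split():
            yield word, 1

    def reduce_phase(key, values):
        # After the shuffle, all values for one key arrive at the same reducer.
        return key, sum(values)

    docs = ["big data big iron", "big clusters"]
    groups = defaultdict(list)
    for doc in docs:                         # simulate the shuffle in-process
        for key, value in map_phase(doc):
            groups[key].append(value)

    print(dict(reduce_phase(k, vs) for k, vs in groups.items()))
    # {'big': 3, 'data': 1, 'iron': 1, 'clusters': 1}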

Re:Data management problem (1)

Hognoxious (631665) | more than 4 years ago | (#29729833)

Quite. Dr Snow didn't need squillobytes of data to discover the cause of cholera, just a few hundred cases, some keen observation and a bit of intuition.

Re:Data management problem (4, Insightful)

Interoperable (1651953) | more than 4 years ago | (#29730247)

Yeah no kidding. I don't know if maybe that quote ('Science these days has basically turned into a data-management problem') was taken out of context, but I'm surprised a professor would say something that ignorant. I recently did a Master's in physics and it certainly didn't involve huge quantities of data; I ended up transferring much of my data off a spectrum analyzer with a floppy drive. (When we lost the GPIB transfer script I thought it would take too long to learn the HP libraries to rewrite it. That was a mistake, after 4 hours of shoving floppies in the drive I sat down and wrote a script in 2 hours, ah well.)

But the point is, a 400 data point trace may be exactly what you need to get the information you're looking for. Just because we can collect and process huge quantities of data doesn't mean that all science requires you to do so, nor is simply handling the data the critical part of analyzing it.

Re:Data management problem (3, Insightful)

FlyingBishop (1293238) | more than 4 years ago | (#29730289)

It's also useless to say 'hey I'm analyzing this graph' if you're analyzing it wrong. I think you're missing the big picture. It's incredibly naive to think that the fundamental laws are simple enough to be grasped without massive datasets. It is possible, but all the data gathered thus far suggests that the fundamental laws of nature will not be found by someone staring at an equation on a whiteboard until it clicks. That is why CERN's data capacity is measured in terabytes, and they want to grow it as much as possible. That's why we have so much genetic data.

Scientific method and principles count, but they are not enough.

School Should Focus on Basics (0)

Anonymous Coward | more than 4 years ago | (#29732991)

Unfortunately businesses want to turn the U.S.A. education system into a head-start training program. The problem is, if you focus on specific technologies or techniques, what is a student going to do when those skills are obsolete and they get "right-sized" out of the market? A solid understanding of basic principles and techniques for problem solving would go a long way to getting our level of education up where it should be. Then turn around and offer some cool tools and resources for projects, extra-curriculars, or extra credit. If a college or high school wants to design a special class to learn how to use newer tools and newer tech, that is great, but if the people in the class haven't mastered the basics of written or verbal communication, it is going to be a very, very short class.

everybody can (1)

Fotograf (1515543) | more than 4 years ago | (#29729745)

everybody can capture a ridiculous amount of data; doing it smart and managing it is what makes a genius.

Re:everybody can (1)

CarpetShark (865376) | more than 4 years ago | (#29729775)

everybody can capture a ridiculous amount of data; doing it smart and managing it is what makes a genius.

Go ahead and manage it, genius. The rest of us just use Azureus for that ;)

Re:everybody can (1)

HNS-I (1119771) | more than 4 years ago | (#29729971)

Allow me to introduce you to µTorrent [utorrent.com], poor chap.

The article mentions Hadoop, which is an open source version of Google's map-reduce template (I think you can call it that). This is great and all, but it is a fairly static mechanism and hardly the end-all of distributed computing. Shouldn't university students be working on the next generation?

Re:everybody can (1)

CompMD (522020) | more than 4 years ago | (#29733741)

Managing large amounts of data was a problem for the chief engineer on a project I worked on. This guy had a PhD in Aerospace Engineering and lots of professional and academic honors. I was running a wind tunnel test that was capturing 8 24-bit signals at 10kHz and writing the data to a CSV. Now, he bought good hardware, but refused to pay for decent analysis software, mainly because he didn't know of any. So I had to write a program to break up the data into files small enough that Excel could open them, and then he could work with them. I volunteered to write something with a database backend and use gnuplot to graph data, but noooooo, that would take precious engineering time.

Long story short, he ended up spending more time figuring out how to screw with Excel than he spent actually figuring out what the data meant. Of course, the customer had to pay for his lack of competence. I'm so glad I don't have to deal with that guy any more.
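
The splitter described would have looked something like this (a sketch; the input filename and the 65,536-row limit of Excel in that era are assumptions):

    import csv

    def split_csv(path, rows_per_file=65000):
        # Break a huge CSV into chunks small enough for Excel 2003 to open.
        with open(path, newline="") as src:
            reader = csv.reader(src)
            header = next(reader)
            part, out, writer = 0, None, None
            for i, row in enumerate(reader):
                if i % rows_per_file == 0:   # start a new chunk, repeating the header
                    if out:
                        out.close()
                    part += 1
                    out = open("%s.part%03d.csv" % (path, part), "w", newline="")
                    writer = csv.writer(out)
                    writer.writerow(header)
                writer.writerow(row)
            if out:
                out.close()

    split_csv("windtunnel_run.csv")  # hypothetical input file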

Generation R (0)

Anonymous Coward | more than 4 years ago | (#29729747)

If we are the generation Y, they will be the Generation R - from Ritalin

Re:Generation R (1)

bsDaemon (87307) | more than 4 years ago | (#29729915)

oh... I thought this was going to be some clever advertisement for the 'R' programming language -- http://www.r-project.org/

The LSST? (4, Informative)

aallan (68633) | more than 4 years ago | (#29729755)

Students are beginning to work with data sets like the Large Synoptic Survey Telescope, the largest public data set in the world. The telescope takes detailed images of large chunks of the sky and produces about 30 terabytes of data each night.

Err, no it doesn't, and no they aren't. The telescope hasn't been built yet. First light isn't scheduled until late in 2015.

Al.

Re:The LSST? (2, Funny)

Thanshin (1188877) | more than 4 years ago | (#29729983)

You clearly aren't prepared to think in a future frame of reference.

That's the consequence of studying with equipment that existed at the moment you were working with it.

Future generations won't have that problem, as they're already studying with equipment that will be paid for and released to the university several years after their graduation.

Re:The LSST? (2, Interesting)

Shag (3737) | more than 4 years ago | (#29730011)

What aallan said - although, 2015? I thought the big projects (LSST, EELT, TMT) were all setting a 2018 target now.

I went to a talk a month and a half ago by LSST's lead camera scientist (Steve Kahn) and LSST is at this point very much vaporware (as in, they've got some of the money, and some of the parts, but are nowhere near having all the money or having it all built.) Even Pan-STARRS, which is only supposed to crank out 10TB a night, only has 1 of 4 planned scopes built (they're building a second), and has been having optical quality problems with that one. By the time kids born at the turn of the century are leaving high school, though, yes, we do expect things like these to be up and running.

But at the risk of sounding like that one college that publishes a list every year of what the freshman class of that year does and doesn't know, kids born around the turn of the century (my daughter is one) don't have the "OMG a TB!" mentality that we grownups have. The smallest capacity hard-drive my daughter will probably remember was 5 gigs - and that was in an iPod. Things like 64-bit, gigahertz speeds, multiprocessing, fast ethernet, wifi, home broadband... always been there. DVD-R media has, to her knowledge, always been there. (I did once have to explain to her that CDs used to be the size of platters and made of black plastic, after she found some Queensrÿche vinyl.)

She's ten now, and you can put a half-terabyte or more in a laptop, so while the idea of some big scientific project spitting out 50 or 60 laptops worth of data in a night is clearly a lot of data, it's not something that can't be envisioned.

Re:The LSST? (1)

oneiros27 (46144) | more than 4 years ago | (#29731399)

That was my first thought in reading this, too.

There *are* large data systems online now, even if they're not of the scope of LSST. The big difference is that EOS-DIS (earth science) has funding to cover stuff like building giant unified data centers (I think they pull 2TB/day ... per satellite), while the rest of us in the "space sciences" are trying to figure out how to get enough bandwidth to serve our data, and using various distributed data systems (PDS, the VxOs, etc.). Once SDO finally launches (early next year?), we'll be generating over 2TB/day of useful data products (4TB/day of raw data), which is much larger than anything solar physics has been dealing with.

Oh ... and to make things fun -- as someone else commented about today's hard drive sizes -- because of requirements to get things certified by required deadlines, plus planning for procurement lag, plus whatever launch delays (or construction delays, for LSST), the data systems might be 3+ years old by the time there's first light.

(disclaimer -- if it wasn't obvious, I actually work with these 'big science' data systems)

A fantastic idea (2, Interesting)

Anonymous Coward | more than 4 years ago | (#29729765)

This is a great idea. Even in business we often hit problems with systems that are designed by people who just don't think about real-world data volumes. I work in the ERP vendor space (SAP, Oracle, PeopleSoft and so on) and their in-house systems aren't designed to simulate real-world data, so their performance is shocking when you load real throughput into them. And so many times have I seen graduates think Microsoft systems can take enterprise volumes of data - and be shocked when they build something that collapses under a few terabytes or so! I'm used to having to post millions of transactions a day, and there isn't an MS system in the world that deals with that. No offence to MS - we use Excel for reporting and drilldowns, and Access a lot, but understanding the limitations of the tools, what they can really handle and scale to, is essential. As is understanding what large data volumes actually are these days!

I know of a large bank that put in an ERP system using Intel and MS SQL Server (with LOTS of press). We were a bit shocked, actually, because that bank was larger than we were, and we had mainframes struggling to cope with our transaction load. In fact I was hauled over the coals for the cost of our hardware - so I investigated. The Intel/MS solution failed so miserably they quietly shut it down and moved back to their mainframe - no press! It wasn't able to cope with the merest fraction of the load and couldn't have. The people involved had no conception of what large meant (they thought that a faster processor was all you needed - it never occurred to them that you get something for all the extra money you pay for a mainframe!).

I think this is a terrific idea - but not only at the scale of the whole internet: they should teach this so the students understand these concepts for any large corporation they may work for!

Students don't need to think at internet scale (2, Insightful)

Rosco P. Coltrane (209368) | more than 4 years ago | (#29729777)

They just need to think. That's what they study for (ideally). Thinking people with open minds can tackle anything, including the "scale of the internet".

When I was in high school, I used a slide rule. When I entered university, I got me a calculator. Did maths or problem solving abilities change or improve because of the calculator? No. Students today can jolly well learn about networking on small LANs, or learn to manage small datasets on aging university computers; so long as what they learn is good, they'll be able to transpose their knowledge to a vaster scale, or invent the next Big Thing. I don't see the problem.

Re:Students don't need to think at internet scale (2, Insightful)

adamchou (993073) | more than 4 years ago | (#29730149)

A LOT of research has been put into improving algorithms for working at large scales. If we don't teach our youth all that we have learned, they are just going to have to figure it out themselves and continue to reinvent the wheel. How are we supposed to advance if we don't put them in a situation to learn and apply our newfound knowledge?

Re:Students don't need to think at internet scale (0)

Anonymous Coward | more than 4 years ago | (#29731539)

... By not teaching our youth all that we have learned in school, they are just going to have to figure it out themselves an continue to reinvent the wheel. How are we supposed to advance if we don't put them in a situation to learn and apply our new found knowledge?

We teach them to research and think for themselves. To think from first principles, so that they can solve a problem regardless of scale: they define the parameters and use logic to search out a system to solve it.

We teach them not how to make the wheel, or how to necessarily re-invent it every time, but rather how to find out if anyone has already constructed one. If someone has, then how to attach to the axle they have; if no one has, then how to take wood/steel/rubber/etc., and make their own.

At least that's what I always though CS and engineering was about. If you want cookie cutter formulas go to a trade school to become a code monkey/plumber/electrician. We need the latter skills just as much as the former in any society, but the two should not be confused with each other.

Re:Students don't need to think at internet scale (1)

Alpha830RulZ (939527) | more than 4 years ago | (#29735051)

They can do it the same way that us geezers have had to do it, by figuring out that something is important and studying it on your own. Says the guy with grey hair and an accounting degree who is building a Hadoop based prototype to test replacing mainframe processing systems with a map-reduce approach.

Re:Students don't need to think at internet scale (2, Informative)

Strange Ranger (454494) | more than 4 years ago | (#29730181)

I don't see the problem.

^Maybe this illustrates the point?

Really, really big numbers can be hard for the human brain to get a grip on. But more to the point, operating at large scales presents problems unique to the scale. Think of baking cookies. Doing this in your kitchen is a familiar thing to most people. But the kitchen method doesn't translate well to an industrial scale. Keebler doesn't use a million-gallon bowl and cranes with giant beaters on the end. They don't have ovens the size of a cruise ship. Just because you can make awesome cookies in your kitchen doesn't qualify you one bit to work for Keebler.
Whether it's cookies or scientific inquiry it's a good idea to prepare students to process things on the appropriate scale.

Re:Students don't need to think at internet scale (2, Funny)

vxvxvxvx (745287) | more than 4 years ago | (#29730267)

So when it comes to really, really big numbers, we need to rely upon elves in trees?

Re:Students don't need to think at internet scale (1)

Yvanhoe (564877) | more than 4 years ago | (#29730307)

Shhhh, let them start their One Supercomputer Per Child program. It can only be good.

Re:Students don't need to think at internet scale (1)

jc42 (318812) | more than 4 years ago | (#29735077)

We might note that in 1970, a computer with the capacity of the OLPC XO would have been one of the biggest, fastest supercomputers in the world. And you couldn't even buy a computer terminal with a screen that had that resolution. Now it's a child's (educational) toy.

The first computers I worked with had fewer bytes of memory+disk and a slower processor than the "smartphone" in my pocket. (Which phone doesn't matter; it'd be true for all of them. ;-)

Do we all work with all the data in internet? (0)

Anonymous Coward | more than 4 years ago | (#29729799)

Now you are focusing on a problem of a small area (big sets of data), which is OK in itself.

Just don't forget that small scale makes all the difference.

Why? (1)

benjamindees (441808) | more than 4 years ago | (#29729817)

Add me to the list of people who think this is a solution in search of a problem.

Oh, who the hell am I kidding. I'm sure the problem they have in mind has something to do with spying on people.

Wrong (2, Insightful)

Hognoxious (631665) | more than 4 years ago | (#29729819)

Summary uses data and information as if they are synonyms. They are not.

yes, but ... (1)

oneiros27 (46144) | more than 4 years ago | (#29731517)

Because of the computing power needed to generate the higher level data products, some data systems are serving level 1 data (calibrated data), not the raw sensor recordings (level 0).

Knowledge of the sensor's characteristics is thus encoded into the products being served, and so, from an Information Science standpoint, you could characterize the higher level data products as "Information", not "Data". ... see, I *did* actually read the first chapter of Donald Case's book [amazon.com]. (Although, I proved that by criticizing it when I met him at the ASIS&T annual meeting a few years back, and he said he had just sent the second edition to press, and could've used the comments a little earlier.)

Not until Internets are improved... (0)

Anonymous Coward | more than 4 years ago | (#29729827)

... and opened up for anyone to use, as well as more datasets opened freely for anyone to use.

These 2 things are holding back innovation in so many areas.
Damn ISPs and their laziness. (read: greed)

Indeed (5, Interesting)

saisuman (1041662) | more than 4 years ago | (#29729861)

I worked for one of the detectors at CERN, and I strongly agree with the notion of Science being a data management problem. We (intend to :-) pull a colossal amount of data from the detectors (about 40 TB/sec in the case of the experiment I was working for). Unsurprisingly, all of it can't be stored. There's a dedicated group of people whose only job is to make sure that only relevant information is extracted, and another small group whose only job is to make sure that all this information can be stored, accessed, and processed at large scales. In short, there is a lot that happens with the data before it is even seen by a physicist. Having said that, I agree that very few people have a real appreciation and/or understanding of these kinds of systems, and even fewer have the required depth of knowledge to build them. But this tends to be a highly specialized area, and I can't imagine it's easy to study it as a generic subject.
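
In spirit (and only in spirit - the real trigger chain is custom electronics feeding a large server farm), that extraction stage is a cascade of cheap cuts applied before anything is stored; a toy sketch with invented cut functions and event fields:

    def level1_cut(event):
        # Fast, crude test; done in hardware in the real system.
        return event["energy"] > 5.0

    def high_level_cut(event):
        # Slower software test, run only on events that survive level 1.
        return len(event["tracks"]) >= 2

    def select_events(stream):
        # Yield only events worth storing; everything else is discarded forever.
        for event in stream:
            if level1_cut(event) and high_level_cut(event):
                yield event

    events = [{"energy": 7.1, "tracks": [1, 2]}, {"energy": 0.3, "tracks": []}]
    print(list(select_events(events)))  # only the first event survives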

Re:Indeed (0)

Anonymous Coward | more than 4 years ago | (#29730769)

40 TB/sec? I'm impressed. In radio astronomy, the ASKAP telescope is only going to manage about 10 TB/sec, and it won't be online until 2012.

Re:Indeed (1)

saisuman (1041662) | more than 4 years ago | (#29731705)

Our experiment (LHCb, not to be confused with the collider, called the LHC) looks for specific types of particle collisions. The rate of the collisions determines the frequency of sampling the sensors, and the number of sensors (and the number of bits read out from each sensor) determines the size of each sample. But with a combination of some really fast electronics and a large cluster of general-purpose servers we end up getting this rate down to somewhere between 150 to 300 MB/sec. This is (designed to) read out about 8-9 hours a day for 9 months a year. The design was quite interesting, mainly because at each level, you had to have a team that had people with deep knowledge of detectors, physics, electronics, and computer systems to decide what to retain, how to retain, and how to discard what. I've been told that it's been a long time since we just read out everything that came out of a detector and analysed it later. (IANAP)

Re:Indeed (1)

BJ_Covert_Action (1499847) | more than 4 years ago | (#29733247)

Unsurprisingly, all of it can't be stored. There's a dedicated group of people whose only job is to make sure that only relevant information is extracted, and another small group whose only job is to make sure that all this information can be stored, accessed, and processed at large scales.

I didn't know they needed perl coders at CERN. No wonder everyone is afraid of the LHC destroying the world...

=P

Re:Indeed (1)

jtownatpunk.net (245670) | more than 4 years ago | (#29734029)

This is nothing new. I worked at a university back in the early 90s and the center for remote sensing and optics was pulling in more data every single day than most department servers could hold. Their setup was both amazing and frightening. Just a massive pile of machines with saturated SCSI controllers. One of their big projects was to build a 4tb array. But 9.6 gig drives were just trickling into the market at that time. You'd need over 400 of those just to provide 4tb of raw storage. Nevermind parity and redundancy. And even if they did manage to design the system, the cost...

But my point is that scientists and their support groups have been managing large sets of data for as long as there's been scientists generating data to manage. We've ramped up the capacity and efficiency of our storage technology and they've ramped up the amount of data they collect and the amount of processing they do to it.

Internet scale of petabytes of data... (1)

fatp (1171151) | more than 4 years ago | (#29729865)

As an Internet user, I really can't imagine how I can download / upload petabytes of data, in my whole life.

Re:Internet scale of petabytes of data... (0)

Anonymous Coward | more than 4 years ago | (#29729873)

First of all, you need huuuuuuuuge tubes.

Re:Internet scale of petabytes of data... (0)

Anonymous Coward | more than 4 years ago | (#29729993)

Get a better ISP :)

A 3mbit/s connection averaging 80% utilization is roughly 1GB/hour. Downloading 1 petabyte (10^15) at this rate takes 10^6 hours. This is 42 kilodays, or 114 years. You can do it, if you start young enough and live long enough.
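
For anyone who wants to check the arithmetic, here it is using the parent's rounded 1 GB/hour figure:

    gb_per_hour = 1                      # parent's rounded figure for 3 Mbit/s at ~80% use
    hours = 1e15 / (gb_per_hour * 1e9)   # 1 PB at ~1 GB/hour -> about 1e6 hours
    days = hours / 24                    # ~41,700 days, i.e. ~42 kilodays
    years = days / 365                   # ~114 years
    print(hours, days, years)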

Re:Internet scale of petabytes of data... (1)

troll8901 (1397145) | more than 4 years ago | (#29732229)

This is 42 kilodays, or 114 years. You can do it, if you start young enough and live long enough.

This sounds exactly like the savings plan offered by my local bank, with a massive 0.00001% interest rate compounded. Hooray to financial freedom within 114 years!

Huge Misstatement (3, Insightful)

Jane Q. Public (1010737) | more than 4 years ago | (#29729875)

"Science these days has basically turned into a data-management problem," says Jimmy Lin.

This is about the grossest misstatement of the issue that I could imagine. Science is not a data-management problem at all. But it does, and will, most certainly, depend on data management. They are two very different things, no matter how closely they must work together.

Re:Huge Misstatement (1)

Shrike82 (1471633) | more than 4 years ago | (#29729935)

Exactly. These snappy one-liners are annoying and almost always inaccurate. I dabble in data mining, and while significant breakthroughs can be made by trawling through large amounts of mostly useless data, the most pertinent discoveries usually relate to just a few significant data features. More time and effort should be devoted to managing how much data gets produced and ensuring that what you do store is highly likely to be useful.

Re:Huge Misstatement (0)

Anonymous Coward | more than 4 years ago | (#29730721)

data-mining != data-management...

The Petabyte Problem (4, Insightful)

ghostlibrary (450718) | more than 4 years ago | (#29729881)

I wrote up some notes from a NASA lunch meeting on this, titled (not too originally, I admit) 'The Petabyte Problem'. It's at
http://www.scientificblogging.com/daytime_astronomer/petabyte_problem [scientificblogging.com]. It's not just a question of thinking on the 'Internet scale', but about massive data handling in general.

What makes it different from previous eras (where MB was big, then GB was big) is that before, storage was expensive, yes, but bandwidth wasn't as much of a problem, at least locally. You could store MBs or GBs on tape, ship it, and extract the data rapidly -- bus and LAN speeds were high. Now, with PB, there's so much data that even if you ship a rack of TB drives and hook it up locally, you can't run a program over it in reasonable time, particularly for browsing or inquiries.

So we're having to rely much more on metadata or abstractions to sort out which data we can then process further.
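
Concretely, "metadata first" just means consulting a small catalogue to decide which bulky files are worth touching at all; a schematic sketch with invented paths and fields:

    # A tiny metadata index standing in for a catalogue of petabytes of files.
    catalogue = [
        {"path": "/archive/scan-001.dat", "instrument": "LSST", "night": "2015-11-02"},
        {"path": "/archive/scan-002.dat", "instrument": "Pan-STARRS", "night": "2015-11-02"},
    ]

    def candidates(instrument):
        # Only the metadata is read here; no bulk data is touched yet.
        return [entry["path"] for entry in catalogue if entry["instrument"] == instrument]

    # Just the handful of selected files get shipped off for full processing.
    print(candidates("LSST"))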

Re:The Petabyte Problem (1)

belthize (990217) | more than 4 years ago | (#29730363)

Agreed, more or less.

If you pick a random starting point, say the mid/late 80s, the rates of improvement for CPU speeds, bus speeds, network speeds, disk speeds and disk sizes were similar. The differences in their doubling rates were a matter of months, not years or decades. Through the 90s and the last 10 years, what worked in the late 80s continued to more or less work.

Disk capacity has had the fastest doubling time, and networks the slowest, over the past two decades. The result is the difference between now and then: what appear to be whopping big data sets that are difficult to transmit and require more parallelism to reduce.

Re:The Petabyte Problem (1)

zrq (794138) | more than 4 years ago | (#29733987)

"Computer, show me all ship-like objects, in any profile. Ah, there it is."

We are working on it: IVOA [ivoa.net].

Well... (2)

DavidR1991 (1047748) | more than 4 years ago | (#29729909)

If you swap the focus from smaller size problems to the mega-scale problems, then you get a bunch of students who can only do mega-scale problems (reverse of the trend the article talks about)

Here's the rub: It's easier to scale up than it is to scale down. Most big problems are made up of lots of little problems. Little problems are rarely made up of mega-scale problems...

I think what they need to do is to keep the focus on the small/'regular' stuff, but also show how their knowledge applies to the "big stuff" (so they can 'see' problems from both ends) - not just focus on one or the other

Re:Well... (1)

Cederic (9623) | more than 4 years ago | (#29729995)

Without disagreeing with you, I'd suggest that small scale problems have different answers to large scale ones.

The obvious approach is thus to teach both.

Although there are a lot of petabyte scale problems out there, as a proportion of the total problem space they are still minute. Most students won't need to work on them.

Further to that, there's no point being able to address a large scale problem if the building blocks you're using (which individually need to deal with individual data points) aren't sufficiently optimal.

Taking Google as an example, they deal with frankly astonishing volumes of data (both stored and in transit) and have designed their systems to handle that volume. They disregard some low scale issues because redundancy and volume smooth the bumps. Knowing which small scale issues to ignore requires a fairly fundamental understanding at that micro scale and how it translates to the macro level at which they operate.

Hmm, I think I just repeated your point.

IBM (1)

sdiz (224607) | more than 4 years ago | (#29729997)

... a director at IBM's Almaden Research Center

He is just trying to sell some mainframe computer.

Work at enterprise... (2, Interesting)

SharpFang (651121) | more than 4 years ago | (#29730005)

It was a very surprising experience, moving from small services where you get maybe 10 hits per minute to a corporation that receives several thousand hits per second.

There was a layer of cache between each of 4 application layers (database, back-end, front-end and adserver), and whenever a generic cache wouldn't cut it, a custom one was applied. On my last project there, the dedicated caching system could reduce some 5000 hits per second to 1 database query per 5 seconds - way overengineered even for our needs but it was a pleasure watching the backend compressing several thousands requests into one, and the frontend split into pieces of "very strong cache, keep in browser cache for weeks", "strong caching, refresh once/15 min site-wide", "weak caching, refresh site-wide every 30s" and "no caching, per visitor data" with the first being some 15K of Javascript, the second about 5K of generic content data, the third about 100 bytes of immediate reports and the last some 10 bytes of user prefs and choices.
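
Those four tiers map naturally onto HTTP Cache-Control headers; a sketch with made-up content classes (the parent's actual system was custom, not plain HTTP caching):

    # Cache lifetime per content class, mirroring the four-way split above.
    CACHE_TIERS = {
        "static_js":    "public, max-age=1209600",  # ~15K of JS, weeks in the browser cache
        "generic_data": "public, max-age=900",      # ~5K, refreshed once per 15 min
        "live_reports": "public, max-age=30",       # ~100 bytes, 30 s site-wide refresh
        "user_prefs":   "private, no-cache",        # ~10 bytes, per-visitor only
    }

    def cache_header(content_class):
        # Fall back to no-store for anything unclassified.
        return {"Cache-Control": CACHE_TIERS.get(content_class, "no-store")}

    print(cache_header("live_reports"))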

Sooo true! (1)

psnyder (1326089) | more than 4 years ago | (#29730009)

'If they imprint on these small systems, that becomes their frame of reference and what they're always thinking about,' said Jim Spohrer

That is SOOO true! I mean, I was brought up on my Commodore 64, and I have NO IDEA how to to contemplate petabytes of data! (What does that EVEN MEAN?!?) I still don't see why ANYONE would need more than 64kB of memory.

Needle in the haystack ... (1)

foobsr (693224) | more than 4 years ago | (#29730053)

'Science these days has basically turned into a data-management problem,'

The assumption here is that with 'size of data-set approaching infinity' the probability of finding a random result is approaching 1. Ph.D. students might like that.

CC.

data reduction is its own discipline (0, Troll)

petes_PoV (912422) | more than 4 years ago | (#29730087)

A degree course is the first step, not the final result, in a worthwhile scientific education. You don't expect to teach every student every technique they might use in every job they could get. Most of them won't even go into research - so there is a lot of waste in teaching people skills that only a few will need. Far better to focus on the foundations (which could well include the basics of data analysis) rather than spending time on the ins and outs of products that are in use today - and will therefore be obsolete by the time a graduate needs that skill.

You could very well argue that it's not even a scientist's job to turn petabytes of data into kilobytes of information - that's a technician's role. Scientists are there to create the knowledge, not do the lab assistant's job.

Since I was very young (1)

C0quette (1466487) | more than 4 years ago | (#29730389)

Some have the attitude for juggling with exabytes. Since I was very young I've realized I never wanted to be human size. So I avoid the crowds and traffic jams. They just remind me of how small I am. Because of this longing in my heart I'm going to start the growing art. I'm going to grow now and never stop. Think like a mountain, grow to the top. Tall, I want to be tall. As big as a wall. And if I'm not tall, then I will crawl. With concentration, my size increased. And now I'm fourteen stories high, at least. Empire State Human! Just a born kid, I'll go to Egypt to be the pyramids. Brick by brick. Stone by stone. Growing till I'm fully grown. Fetch more water. Fetch more sand. Biggest person in the land. The Human League.


Past Time to Stop Using int (1)

scruffy (29773) | more than 4 years ago | (#29730943)

Is there a single intro to programming book that uses long in favor of int? Just like double has replaced float for almost all numerical calculations, we need long to replace int.

Re:Past Time to Stop Using int (0)

Anonymous Coward | more than 4 years ago | (#29731795)

No. We need people to use the correctly sized object for the item they're storing.

If you're storing a counter which goes from 0 to 24950, there's not much point using a long - unless you're worried about the performance aspects of fetching 16 bits on a 32-bit bus, but most of the time you're more concerned about memory footprint.
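For what it's worth, C99's <stdint.h> lets you say exactly what you mean instead of guessing what int or long happen to be on a given platform. A minimal sketch (the counter and dataset figures are made up):

<ecode>
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* A counter known to stay below 65536: 16 bits document both
     * the intent and the memory footprint. */
    uint16_t counter = 24950;

    /* A value that can plausibly exceed 2^31, e.g. the byte count of
     * a multi-terabyte dataset: pin it to 64 bits explicitly. */
    int64_t dataset_bytes = 30LL * 1024 * 1024 * 1024 * 1024; /* ~30 TB */

    printf("counter = %" PRIu16 ", dataset = %" PRId64 " bytes\n",
           counter, dataset_bytes);
    return 0;
}
</ecode>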

Re:Past Time to Stop Using int (1)

scruffy (29773) | more than 4 years ago | (#29733681)

We need people to use the correctly sized object for the item they're storing.

"Premature optimization is the root of all evil."

That quote aside, I agree with you, but I would also claim that long is the correct size for more integers than int is. Yes, it uses more space, but that is a reasonable tradeoff for safety.

Re:Past Time to Stop Using int (1)

cervo (626632) | more than 4 years ago | (#29733897)

I would claim that on a 32-bit processor, 32-bit integer operations will be more efficient. I would also claim that on a 64-bit processor/operating system, the size of int should be 64 bits, similar to the way int was 16 bits on Windows 3.1 and 32 bits on Windows 95. That was the forced upgrade to 32-bit apps... soon we'll probably have another forced upgrade to 64-bit apps.
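For the record, the mainstream 64-bit ABIs didn't actually widen int: LP64 (Linux, OS X) keeps int at 32 bits and widens long to 64, while Windows' LLP64 keeps both int and long at 32. A one-liner to check your own platform:

<ecode>
#include <stdio.h>

/* Prints the sizes your compiler/ABI actually uses. Expect 4/8/8 on
 * LP64 systems, 4/4/8 on 64-bit Windows, and a 2-byte int on old
 * 16-bit targets. */
int main(void) {
    printf("int: %u, long: %u, long long: %u bytes\n",
           (unsigned)sizeof(int), (unsigned)sizeof(long),
           (unsigned)sizeof(long long));
    return 0;
}
</ecode>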

Proof of Ignorance (0)

Anonymous Coward | more than 4 years ago | (#29736613)

This overwhelming-data issue points to a basic fact. The universe contains a sum of information that we may label X. Humanity at its best operates with far less than 1% of X, which leaves our species more than 99% lost in ignorance. In effect, the noble human mind operates with an intelligence that might as well be as low as the common earthworm's, providing the entire universe with a humorous display, complete with all kinds of social kinkiness, as we assign our notions of intellectual and academic ability to our fellow dumb-as-a-rock humans.

Datasets of interest (1)

xenocide2 (231786) | more than 4 years ago | (#29738431)

Part of the problem is that young students fresh out of high school have no pet datasets. For many, they're buying a new laptop for college and keeping, at most, their music. Chat logs, banking, browsing history: it hasn't occurred to them to keep these things. Hell, I doubt many CS students make backups of their own computers. I know I didn't.

Without a personal dataset of interest to maintain and process, you'll find little demand from students for classes on large-dataset computation, unless they enjoy astronomy or biology or whatever, in which case they're likely in a different major anyway. If we want to train CS majors to help in other fields, we need to promote and identify personal data first.

As simple as possible, but no simpler (0)

Anonymous Coward | more than 4 years ago | (#29738995)

Hopefully the instructors are being a bit more sensible than the summary implies and are teaching students that problems at different scales require different approaches to finding solutions. For a small embedded system, simplicity and efficiency are key. Too many levels of abstraction and caching and you will have a lousy system that barely runs on the target processor. At the opposite end of the scale, appropriate abstractions and caching are absolutely essential in order to effectively manage complex systems with large numbers of transactions or large volumes of data (or both). Keep things too simple and the system will fail to scale adequately.

For any given system you want to try to hit that sweet spot of engineering design: keeping things as simple as possible, but no simpler.

Bandwidth isn't the only issue with Internet Scale (1)

GrpA (691294) | more than 4 years ago | (#29740263)

Working with a small, firewalled service provider that is reasonably large in terms of IP allocation (over half a million addresses), I'm constantly amazed that none of the design engineers I encounter seem to envision the number of sessions a firewall has to cope with.

It's frustrating that we keep encountering firewalls with 10+ Gbps of claimed throughput that fall over at barely more than 100 Mbps due to resource exhaustion, and then the vendor's engineers try to tell us it's because we aren't bonding the NICs.

It seems that no matter how often I explain it to them, they just can't get their heads around the idea that our problem isn't bandwidth, it's the number of sessions.

The scale of the Internet isn't measured just in bits per second. There are other dimensions to it as well.
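Back-of-envelope, with made-up but plausible figures, of how a box drowns in state long before it sees serious bandwidth:

<ecode>
#include <stdio.h>

/* Hypothetical numbers: a million mostly idle sessions (keepalives,
 * slow polls) exhaust the state table while moving barely 100 Mbps. */
int main(void) {
    long table_capacity = 1000000; /* firewall state-table entries */
    long bytes_per_sec  = 15;      /* average for a near-idle session */

    double mbps = (double)table_capacity * bytes_per_sec * 8.0 / 1e6;
    printf("state table exhausted at %ld sessions\n", table_capacity);
    printf("aggregate traffic at that point: ~%.0f Mbps\n", mbps);
    printf("...on a firewall rated for 10 Gbps.\n");
    return 0;
}
</ecode>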

GrpA

Data management issue (0)

Anonymous Coward | more than 4 years ago | (#29740369)

That's why I run "Einstein@home" to help with the search for neutron stars using LIGO (gravitational wave detector) data. If every geek gave up some hard drive space and processor time on all their boxes...

Ram

Hey I knew that! (1)

Whiteox (919863) | more than 4 years ago | (#29741063)

It's like this:
Learn to play all the campaigns in Age of Empires II, where the population limit is 75.
Repeat for a number of years until you are perfect and maximally efficient.
Then go play a networked AOEII game with a pop cap of 200, and you will invariably lose because you can't get your head around it.
The game is simple, yet it is hard to manipulate when scaled up and takes a lot more effort to win. And that's from changing only one variable.

Pedants should be Pedantic (1)

BonysGambit (1316469) | more than 4 years ago | (#29746403)

When we speak of "Science" in a general sense, it's about using the Scientific Method to pursue a goal or enhance our knowledge. This has nothing to do with the size of the data accumulated to perform the task.

These days, all of us are learning to think at "Internet scale." Join Facebook and "befriend" 200 million people. Enroll in LinkedIn and you have 40 million possible connections. National debts are measured in numbers with more zeros than have ever before been used to describe money. In other words, every field of human endeavour these days presents its own data-management problem.

If I may introduce the crass topic of business into such rarefied Scientific air: in today's inbound-marketing arena, the volume of data being accumulated about visitors to one's website, some of whom become prospects and then clients, is literally Internet-sized. So what's a person to do? The same thing we've always done: automate to handle it. We have used technology and tools to overcome human limitations since the first ape used a bone as a hammer (if you liked the movie 2001's analogy). So marketers today can use sales and marketing automation to reduce huge data sets to usable and understandable sizes, in the same way that any other field will employ computer methods to do the same.

Data-management problems, in other words, are a field unto themselves, requiring specialists such as DBAs and hardware and software engineers. Not Scientists in the general sense, but specialists. There's more on these ideas at http://www.inbound-marketing-automation.ca/blog/