Ask Slashdot: Scientific Computing Workflow For the Cloud?

diab0lic writes "I have recently found myself needing on-demand cloud computing for my research. Amazon's EC2 Spot Instances are an ideal platform for this, as I can requisition an appropriate instance for each experiment {high cpu, high memory, GPU instance} depending on its needs. At the moment, however, I spin up the instance manually, set it up, run the experiment, and then terminate it manually. Monitoring experiments for completion gets tedious, and I incur unnecessary costs if, say, a job finishes while I'm sleeping. The whole thing really should be automated. I'm looking for a workflow somewhat similar to this:
  1. Manually create Amazon machine image (AMI) for experiment.
  2. Issue command to start AMI on specified spot instance type.
  3. Automatically connect EBS to instance for result storage.
  4. Automatically run specified experiment, bonus if this can be parameterized.
  5. Have AMI automatically terminate itself upon experiment completion.

Something like Docker that spun up an on-demand spot instance of a specified type for each run and terminated said instance at run completion would be absolutely perfect. I also know HTCondor can use EC2 spot instances as a back end, but I haven't really been able to find any concise information on how to set up a personal cloud — I also think that's slight overkill. Do any other Slashdot users have similar problems? How did you solve them? What is your workflow? Thanks!"

  • EC2 is scriptable (Score:5, Informative)

    by Anonymous Coward on Friday November 29, 2013 @03:16PM (#45556959)

    EC2 is inherently scriptable. There's nothing stopping you from using the command-line tools to fire up an instance, let it run, store its results to S3, and then decommission it. You can even set instances to terminate on shutdown, which deletes the instance's EBS volumes (if you're using EBS) along with the instance itself. Sounds like you just need to spend 30 minutes reading the docs.
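
    A minimal sketch of that flow, using the boto Python library mentioned elsewhere in this thread (the region, AMI ID, instance type, and key name are placeholders):

      import boto.ec2

      # Connect to the region where the experiment should run.
      conn = boto.ec2.connect_to_region('us-east-1')

      # Launch an instance that terminates (rather than stops) when the OS
      # shuts down, so a final `shutdown -h now` in the experiment script
      # cleans up the instance and its EBS root volume.
      reservation = conn.run_instances(
          'ami-00000000',                # placeholder AMI
          instance_type='c1.xlarge',
          key_name='my-key',             # placeholder key pair
          instance_initiated_shutdown_behavior='terminate',
      )
      print(reservation.instances[0].id)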

    • I was about to say, if you can't figure this out, no wonder you need a supercomputer to run your code. :-D

    • by diab0lic ( 1889826 ) on Friday November 29, 2013 @05:22PM (#45557641)
      I'm aware that EC2 is inherently scriptable, though the documentation is incredibly poor in some areas and heavily favours those interested in long-running instances. This post is about asking others what their workflow for short-term spot instances is, and generating some collaboration and sharing of ideas on the subject. Looking through the other comments, there is a PhD who wrote some of his own scripts using boto (and complains about its docs -- a trend here?) and someone working on a product to do this (wonder why he sees a business case for it?). The comments in this thread are evidence enough that there is hardly any consensus on how to do this easily and elegantly. To all those shouting RTFM: you've clearly never read the EC2 docs or tried to use them for this use case. They are hardly adequate -- just take a look at their scientific computing page (http://aws.amazon.com/ec2/spot-and-science/). Not a single person here has said anything along the lines of "RTFM -- I did, and it allowed me to easily do something similar." Saying RTFM because you can doesn't help, nor does it mean anything if the docs are inadequate for the use case in question.
      • Cycle Computing has the Jupiter Job Scheduler that was used in a /. article a couple of weeks ago:

        tech.slashdot.org/story/13/11/13/1754225/121-petaflops-rpeak-supercomputer-created-with-ec2

        Jupiter, or one of their other products, may be exactly what you are looking for. It takes care of startup and shutdown of the VMs and can even bid on spot instances for you. IIRC they even had different packages available depending on the number of instances and level of service required.

        Good Luck!
        Cheers :)

      • Hate to break the news to ya, but it's not too hard; I set up such a thing in an afternoon to generate traffic to load-test an app I am developing. The command-line tools are pretty well documented for this standard workflow.

        I do the first part manually, using the web console:
        1) Launch an instance and install your code on it. Bonus points: write a script to parse the UserData so you can tell it where to pull the source data from (I keep such things in S3 if needed; see the sketch below).
        2) Use that instance to create an AMI.
        3) Use
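
        For the UserData-parsing step, boto itself can fetch whatever was passed at launch from the instance metadata service. A minimal sketch; the "s3://bucket/key" convention for the payload is hypothetical:

          import boto.utils

          # Returns the raw UserData string supplied when the instance was launched.
          user_data = boto.utils.get_instance_userdata()

          # Hypothetical convention: UserData holds an s3:// URL naming the input data.
          print('pulling source data from %s' % user_data.strip())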

    • by dotancohen ( 1015143 ) on Friday November 29, 2013 @07:10PM (#45558211) Homepage

      EC2 is inherently scriptable. There's nothing stopping you from using the command-line tools to fire up an instance, and let it run, and store its results to S3, and then decommission the instance.

      You are correct that what you propose is easy and well documented. However, that is not what the OP needs.

      The OP needs lower-priced spot instances, which are intermittently available and designed exactly for this workflow. When the AWS datacenter has spare capacity, these spot instances start up for those who requested them (usually to crunch data that is not time-sensitive). The use and configuration of these instances is not so well documented, probably because you cannot run a webserver on them, and webservers seem to be the focus of much AWS documentation. However, it is exactly these 'spot instances' which are, in my opinion, the genius of the cloud: they let heavy, non-time-critical work (e.g. scientific computing) be done when the webservers and mailservers aren't so busy, thus flattening out the daily CPU demand curve.

      The OP should start here:
      http://aws.amazon.com/ec2/spot-tutorials/ [amazon.com]

      And end here:
      http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/tutorial-spot-adv-java.html [amazon.com]
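
      For example, placing a spot bid with boto is a one-call affair; the bid price, AMI, and instance type below are placeholders:

        import boto.ec2

        conn = boto.ec2.connect_to_region('us-east-1')

        # The request is fulfilled whenever the spot price drops below the bid.
        requests = conn.request_spot_instances(
            price='0.10',                 # placeholder bid in USD/hour
            image_id='ami-00000000',      # placeholder AMI
            instance_type='m1.large',
        )
        print(requests[0].id)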

      • by hawguy ( 1600213 )

        EC2 is inherently scriptable. There's nothing stopping you from using the command-line tools to fire up an instance, and let it run, and store its results to S3, and then decommission the instance.

        You are correct that what you propose is easy and well documented. However, that is not what the OP needs.

        The OP needs lower-priced spot instances, which are intermittently available and designed exactly for this workflow. When the AWS datacenter has spare capacity, these spot instances start up for those who requested them (usually to crunch data that is not time-sensitive). The use and configuration of these instances is not so well documented, probably because you cannot run a webserver on them, and webservers seem to be the focus of much AWS documentation. However, it is exactly these 'spot instances' which are, in my opinion, the genius of the cloud: they let heavy, non-time-critical work (e.g. scientific computing) be done when the webservers and mailservers aren't so busy, thus flattening out the daily CPU demand curve.

        Why can't you run a webserver on a spot instance? I'm not aware of any restrictions on what you can and cannot run on one. If the dynamic IP is the problem, then either register the dynamic IP with a dynamic DNS provider, register it in Route 53, or use the EC2 command-line tools to associate a static Elastic IP address with the instance (see the sketch below).

        The EC2 API is not complicated (and can run at the command line, and has bindings for common scripting languages), and you can do pretty much anything you want with an inst
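
        As a sketch of the Elastic IP approach mentioned above (the instance ID and address are placeholders), re-pointing a previously allocated address at a new instance is a single boto call:

          import boto.ec2

          conn = boto.ec2.connect_to_region('us-east-1')

          # Associate an already-allocated Elastic IP with a fresh instance.
          conn.associate_address(instance_id='i-00000000', public_ip='203.0.113.10')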

        • Technically one could run a webserver on a spot instance, but the availability of said server will be inversely proportional to datacenter load instead of proportional to website demand. Do you not see why that is a bad idea?

          • by hawguy ( 1600213 )

            Technically one could run a webserver on a spot instance, but the availability of said server will be inversely proportional to datacenter load instead of proportional to website demand. Do you not see why that is a bad idea?

            Depends on why you want to run the webserver. You can register it with a load balancer after startup, and when you run out of spot instances for your web server, you can start up full-priced instances to pick up the slack.

      • Re:EC2 is scriptable (Score:4, Interesting)

        by Salis ( 52373 ) on Friday November 29, 2013 @10:15PM (#45558909) Journal

        The OP needs lower-priced spot instances, which are intermittently available and designed exactly for this workflow.

        Here's how to utilize lower-priced spot instances for scientific computing:

        1. Set up one long-running, low-cost instance (a small is fine) that creates a distributed queue using Amazon's SQS, and adds jobs to the queue corresponding to each "unit" of the relevant computational problem of interest. New jobs can be added using a command line interface, or through a web interface.

        2. Create a user start-up Bash script for the spot instances that runs your main program -- I prefer using Python and boto for simplicity. The main program should connect to the SQS queue and begin an "infinite" while loop. Inside the loop, the next job is pulled off the queue; it contains the input parameters that define the "unit" of the computational problem of interest. These input parameters are fed to the main algorithm, the resulting output is uploaded to Amazon S3, and the loop continues.

        3. Any time the queue is empty or the spot instance remains idle for ~5 minutes, the spot instance auto-terminates using EC2's command-line interface.

        4. Finally, just write a simple Python script to pull all the results off S3, combine & analyze them, and export to another useful format.

        You'll also need to set up your spot instance price threshold and make sure the queue has jobs to run. That's it; it's fairly simple. A sketch of the worker loop (steps 2 and 3) follows below.
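
        A minimal sketch of that worker loop with boto; the queue name, bucket name, region, and run_job() are hypothetical placeholders, and each SQS message body is assumed to hold one job's input parameters:

          import time
          import boto.ec2
          import boto.sqs
          import boto.utils
          from boto.s3.connection import S3Connection

          QUEUE = 'experiment-jobs'      # hypothetical queue name
          BUCKET = 'experiment-results'  # hypothetical bucket name
          IDLE_LIMIT = 300               # seconds of empty queue before self-terminating

          def run_job(params):
              """Hypothetical stand-in for the main algorithm; returns a result file path."""
              raise NotImplementedError

          queue = boto.sqs.connect_to_region('us-east-1').get_queue(QUEUE)
          bucket = S3Connection().get_bucket(BUCKET)

          idle_since = time.time()
          while True:
              message = queue.read()
              if message is None:
                  if time.time() - idle_since > IDLE_LIMIT:
                      break              # queue has stayed empty: time to shut down
                  time.sleep(15)
                  continue
              idle_since = time.time()
              result_path = run_job(message.get_body())    # body holds the parameters
              key = bucket.new_key(result_path)
              key.set_contents_from_filename(result_path)  # upload the result to S3
              queue.delete_message(message)                # delete only after success

          # Self-terminate: look up our own instance ID via the metadata service.
          instance_id = boto.utils.get_instance_metadata()['instance-id']
          boto.ec2.connect_to_region('us-east-1').terminate_instances([instance_id])

        Deleting the message only after the upload means a job interrupted by a spot-price spike simply reappears on the queue for the next worker.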

  • but I haven't really been able to find any concise information on how to set up a personal cloud

    You mean a computer? A server farm? A beowulf cluster?

    To me, 'personal cloud' is a totally meaningless term and doesn't correspond to what the cloud is. If it's a couple of servers you own and control, to me that doesn't sound like 'cloud computing' -- it sounds like a marketing term.

    • by ihtoit ( 3393327 )

      to me, a cloud on a local level is:

      data cluster
      process cluster (could be the same cluster, or a process cluster with a SAN, storage array or just a honkin' huge hard drive)
      and an interface (could be VMware or VirtualBox, or as simple as Remote Desktop (Windows))

      - which anybody on the sub/network with the correct credentials can access at any time.

      This is different from multiple accounts on a personal computer which is subject to the power state of the system: a cloud's accessibility would by definiti

    • You certainly can have a personal cloud, or an internal cloud, or a private cloud.

      The term 'cloud' is one of those that people on Slashdot seem to go out of their way to misconstrue or misunderstand, when in fact it's simple -- it's a resource that you want to do X without necessarily wanting to know the in-depth details of how it goes about it. I want a website hosted, I want it redundant and I want it scalable, but I don't necessarily want to give a toss about manually balancing resources across sev

  • SC13 (Score:2, Informative)

    by jsimon12 ( 207119 )

    Bunch of papers on this were presented at SC13 this year. Suggest you look them up.

    http://sc13.supercomputing.org/content/papers [supercomputing.org]

  • by onyxruby ( 118189 ) <onyxruby&comcast,net> on Friday November 29, 2013 @03:18PM (#45556987)

    Symantec Workflow does exactly what you need and is designed explicitly for integration with third-party tools. It spins up everything from disks to automated webforms, jobs, and job imports and exports. There really isn't anything else out there that comes close to what Workflow will do. It used to be called Altiris Workflow, and it works with everything from CMDB, change management, and service desk to multiple languages.

    http://www.symantec.com/connect/articles/learn-about-symantec-workflow [symantec.com]

    • by afidel ( 530433 )

      And since it's from Symantec, expect exactly zero support beyond "is it plugged in" level scriptbots. We dumped NetBackup after over a decade of use because even with a $200k purchase on the line and a regional VP involved we couldn't get effective support.

  • by gdek ( 202709 ) on Friday November 29, 2013 @03:20PM (#45556997)

    Because your workflow is likely to be customized to your tasks, it should be straightforward to write these kinds of tools yourself, with any number of available toolkits, based on what language you're most comfortable using.

    There's the straight CLI: http://aws.amazon.com/cli/

    And lots of sample code for the various SDKs: http://aws.amazon.com/code

    Best to just dive in. If you have any development experience at all, even just scripting, you should be able to figure it out pretty quickly.

  • Since my scientific workflow always includes Python, it is natural for me to use boto.

    https://github.com/boto/boto
    http://boto.readthedocs.org/en/latest/
    http://aws.amazon.com/sdkforpython/
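
    For instance, pushing a result file to S3 with boto takes only a few lines (the bucket and key names are placeholders):

      import boto

      conn = boto.connect_s3()                        # credentials come from the environment
      bucket = conn.get_bucket('my-results-bucket')   # placeholder bucket
      key = bucket.new_key('runs/run-42/output.csv')  # placeholder key
      key.set_contents_from_filename('output.csv')    # upload the local file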

  • You could use GlideinWMS, which was made to manage a pool of dynamic grid resources for scientific computing, such as the Open Science Grid. It can manage personal Condor pools too. I believe it can also connect to Amazon EC2, but I don't see a lot of information on their web page about that. You may have to contact them directly, but I know that the team is very responsive and interested in finding more scientific users. More information here: http://www.uscms.org/Software [uscms.org]
  • To the OP: Please refer to the provided documentation or use a search engine to find tutorials, if you dare. There is an official API for this. We won't recite manuals here.

    To the /. community: Why is a question that can be answered with an "RTFM" landing on the front page?

  • Jenkins would probably be useful in this case, with this plugin:

    https://wiki.jenkins-ci.org/display/JENKINS/Amazon+EC2+Plugin [jenkins-ci.org]

  • You can create your own personal cloud -- call it a private cloud -- and then automate all your tasks. I have been doing the same: I use fabric (for automation) and boto (euca2ools) for controlling the cloud (creating instances, volumes, etc.). Eucalyptus helps you create your own private cloud, giving you an easy IaaS implementation. OpenStack has a growing following, and you may prefer it to Eucalyptus. There are lots of other tools available, however.

  • by Fubar420 ( 701126 ) on Friday November 29, 2013 @03:53PM (#45557209)

    Amazon's http://aws.amazon.com/cloudformation/ [amazon.com] can get you 95% of the way there (add a few small scripts via boto, or some integration with http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html [amazon.com]).

    A little elbow grease will get you the rest of the way without additional costs.
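
    A sketch of driving CloudFormation from boto; the stack name and template file are placeholders, and the template JSON itself would describe the instance, the EBS volume, and their wiring:

      import boto.cloudformation

      conn = boto.cloudformation.connect_to_region('us-east-1')

      # Create the whole experiment stack from a JSON template in one call.
      conn.create_stack('experiment-stack',
                        template_body=open('template.json').read())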

  • This is sort of a scripting issue, and PowerShell has modules for everything under the sun -- including Amazon:
    http://aws.amazon.com/powershell/ [amazon.com]

    Not sure whether your instances themselves are running Windows, but if so that would be even easier to integrate.

  • Have you tried Google or the AWS documentation? What you are asking for is the most basic, bare-bones use case. They even have services set up to make this kind of thing easier, like the Simple Workflow Service and the Simple Queue Service.

    high-level introduction to workflow service:
    http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html [amazon.com]

    recipes using workflow service:
    http://aws.amazon.com/code/2535278400103493 [amazon.com]

    • SWF appears to be much more interested in helping me manage clusters of instances than in streamlining the lifecycle of a single customized spot instance from inception to termination.
  • by Lally Singh ( 3427 ) on Friday November 29, 2013 @04:12PM (#45557317) Journal

    Here's how I ran my PhD simulations on EC2:
    - The AMI downloads a manifest file at startup.
        - The manifest has one record per line, two fields per record: the s3 URL of a .tar.gz to download, and the path to download it to
    - The AMI then runs a shell script (/etc/run.sh) that's been put there by a manifest entry

    Shell scripts upload new files to S3 (e.g., /etc/run.sh) and have EC2 launch new VMs. When the VMs come up, they're running everything I need, ready to go.
    Other shell scripts stopped and started experiments on these VMs.
    Other shell scripts shut down the VMs when I was done.
    The scripts did little more than scan the appropriate machine list from the EC2 tools and ssh into each machine with a specific command.

    Toward the end, I had some of the experiment-specific scripts git clone/pull files I was changing frequently between experiments.

    All of it worked really well for me. Nothing fancier than the EC2 command-line tools, bash, ssh, and git was necessary.
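
    A sketch of the manifest step under the conventions described above (two whitespace-separated fields per record). The manifest URL is a placeholder, and unpacking relative to / so that archives can drop files like /etc/run.sh into place is an assumed convention:

      import subprocess
      import urllib2
      import boto

      MANIFEST_URL = 'https://s3.amazonaws.com/my-bucket/manifest.txt'  # placeholder

      s3 = boto.connect_s3()

      def fetch(s3_url, local_path):
          # s3_url looks like s3://bucket/key; split it apart for boto.
          bucket_name, key_name = s3_url[len('s3://'):].split('/', 1)
          s3.get_bucket(bucket_name).get_key(key_name).get_contents_to_filename(local_path)

      for line in urllib2.urlopen(MANIFEST_URL).read().splitlines():
          if not line.strip():
              continue
          s3_url, local_path = line.split()  # two fields per record
          fetch(s3_url, local_path)
          # Unpack relative to / so archives can install files anywhere.
          subprocess.check_call(['tar', 'xzf', local_path, '-C', '/'])

      # One manifest entry is expected to have installed /etc/run.sh; run it.
      subprocess.check_call(['/bin/sh', '/etc/run.sh'])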

  • by Anonymous Coward

    I have used MIT's starcluster [mit.edu] in the past for something very similar to this workflow. It provides a very user-friendly interface to EC2 spot instances for almost exactly the workflow you're looking for. They provide AMIs you can customize and a relatively well-documented set of commands to easily launch spot instances.

  • Docker looks promising, but there are other existing services stacked on EC2 that address the needs of science workloads. PiCloud does exactly the things you're asking for: http://www.picloud.com/platform/ [picloud.com] . And the folks at Cycle Computing use Condor to manage the largest jobs ever run on EC2: http://www.cyclecomputing.com/ [cyclecomputing.com] . I'm still working on my own stuff based on Groovy and Condor which I call Gondor, but it isn't at all ready for others to use. One thing I have found to be great is that there
  • We have a product in development that does just this - it can spin up spot nodes with the best price/performance ratio, dispatch tasks and restart them if a spot node fails. With lots of other goodies.

    Drop me a note if you're interested: alex.besogonov@gmail.com
  • Putting aside my slashvert suspicions of the post (hard to see how you could have chosen AWS at all and be so clueless),

    I've done this kind of thing a lot. Here's my approach:

    1. Fire up an EBS-backed AMI from an existing stock version of your favorite OS (Ubuntu 12.04 for me, just cos I use it on the desktop and can't be bothered with the differences)
    2. Customize it with your own shit
    3. Include in /etc/rc.local a script to customize things further... and because you don't want to faff about changing the AMI every

  • by austingeekgirl ( 1113797 ) on Friday November 29, 2013 @06:05PM (#45557901)

    http://star.mit.edu/cluster/ [mit.edu]

    The rest of it is easily scriptable. I have some EBS-based AMIs that, on bootup, connect to a central server,
    register themselves (tick up a text file, and add themselves to /etc/hosts).

    If you combine starcluster for generic cluster management with the existing Amazon-provided tools
    (http://blog.roozbehk.com/post/35277172460/installing-amazon-ec2-tools [roozbehk.com]),
    this is really only a day's worth of scripting and testing.

    There are also several public AMIs on Ec2 that are oriented towards scientific computing.
    http://www.google.com/search?q=ec2%20ami%20scientific [google.com]

    This is my day job stuff.

    • Second on StarCluster. Very easy to get up and running quickly. It is well documented, with a good plugin system. If you do scientific computing, then you are probably familiar with most of the tools that are built in: SGE, IPython, NFS, etc. Aside from the provided Amazon tools, I find the boto (Python) library to be helpful if I need to interact with S3 or SQS.
    • One of the StarCluster plug-ins is for Condor, which is supported in their latest AMIs. Perfect for me.
  • Check out Cycle Computing's CycleCloud product: http://www.cyclecomputing.com/wiki/index.php?title=CycleCloud [cyclecomputing.com] They offer meta-scheduling products specifically for managing HTCondor pools in AWS. The Cycle team works closely with the HTCondor team and supports loads of scientific projects. Their products have historically been free for academic use.
  • As others have pointed out, deploying EC2 instances automatically is fairly easy using the well-documented EC2 APIs.

    The difficult part about distributed computing is synchronizing the work between available instances. For this, you might want to look at RabbitMQ [rabbitmq.com] or other queueing servers. One way to do this would be to have one thread (on your computer) generating problem instances, while you spawn spot instances on EC2 as desired, which consume the work and report the results. I suspect you could accomp
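
    A sketch of the producer side with the pika RabbitMQ client (the broker host and queue name are placeholders); spot workers would run a matching consumer loop:

      import pika

      # Connect to a RabbitMQ broker the spot instances can reach.
      connection = pika.BlockingConnection(
          pika.ConnectionParameters('broker.example.com'))  # placeholder host
      channel = connection.channel()
      channel.queue_declare(queue='work', durable=True)     # survive broker restarts

      # Publish one message per problem instance; workers consume and report results.
      for n in range(100):
          channel.basic_publish(exchange='', routing_key='work', body='problem-%d' % n)
      connection.close()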

  • If you're willing to look beyond AWS, there's something called Manta out there (http://www.joyent.com/products/manta). The data rests on some servers, and you submit UNIX map/reduce jobs. The jobs are run on the nodes where the data is resting, you get a full UNIX environment, and you only get charged as you'd expect (compute time, combined with the cheaper at-rest time). It might be a better fit for what you're doing than your proposal, plus it'll likely be faster too due to reduced data movement.

  • Look up cloudify on cloudifysource.org.

    It enables spinning up machines on the cloud of your choice (including EC2). Then it installs and configures your software on those VMs. Finally, it monitors all processes that you ask it to monitor, including listening to exposed custom metrics, e.g. over a JMX port.

    In your case, when your experiment ends, if your software exposes some API or metric that can indicate that, cloudify can use it as a trigger for shutting down or spinning up the next experiment.

    A ni

  • I strongly recommend this command-line tool. With it, you can do all those operations and more, in a sensible and uncluttered fashion:

    http://www.timkay.com/aws/ [timkay.com]
