
(caveat: in practice you'd do this differently, making sure applications were prestarted/preregistered on remote nodes, ensuring your network was already connected, and wrapping most of these calls in nicer functions, but for the sake of illustration i've described the naive way to do it)

    %% ping a remote node to connect to it
    pong = net_adm:ping('foo@127.0.0.1'),
    %% start my application on the remote node
    _ = spawn('foo@127.0.0.1', application, start, [my_app]),
    %% my_app is a gen_server, an otp process that abstracts
    %% async request/response and makes it look more like
    %% synchronous function calls
    %% this example assumes the gen_server accepts the JobArgs
    %% and immediately returns 'ok' as acknowledgement, but does
    %% the work asynchronously
    ok = gen_server:call({my_app, 'foo@127.0.0.1'}, {do_some_work, JobArgs})
if you wanted to distribute a number of jobs over a number of nodes, you'd just repeat this process for each node and spread your jobs however you choose, either round robin or something more sophisticated like rendezvous hashing
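
a minimal round-robin sketch of that might look like the following (the distribute function and the Jobs/Nodes lists are just illustrative; it reuses the my_app gen_server from above):

    %% naive round-robin: walk the node list, wrapping around when it
    %% runs out, handing one job to each node in turn
    distribute(Jobs, Nodes) ->
        distribute(Jobs, Nodes, Nodes).

    distribute([], _Nodes, _Remaining) ->
        ok;
    distribute(Jobs, Nodes, []) ->
        %% exhausted the node list, start again from the top
        distribute(Jobs, Nodes, Nodes);
    distribute([Job | Rest], Nodes, [Node | Remaining]) ->
        ok = gen_server:call({my_app, Node}, {do_some_work, Job}),
        distribute(Rest, Nodes, Remaining).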

this example assumes you're uninterested in the responses. if you wanted the results back, you could either send a pid (process identifier) along with the request, pointing at a process set up to collect the results and do any further processing, or you could spawn a local process that makes the actual call and blocks until it receives the result
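
a rough sketch of the second option (a local process that makes the call and blocks), assuming my_app has been changed to reply with the actual result rather than a bare 'ok':

    %% spawn a local process per job; it blocks on the call and sends
    %% the result back to whoever spawned it
    Self = self(),
    _Worker = spawn(fun() ->
                        Result = gen_server:call({my_app, 'foo@127.0.0.1'},
                                                 {do_some_work, JobArgs},
                                                 infinity),
                        Self ! {job_result, Result}
                    end),
    %% ...do other things, then block until the result arrives
    receive
        {job_result, R} -> R
    end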

as for scheduling a job a week in the future, the naive way would be to set up a timer that, when it expires, spawns a process to execute an arbitrary function. normally no one would do this, though. you'd probably want to use something like sidekiq that can store the job off node and poll regularly to see if there are any jobs you should run
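
the naive timer version could be as little as this (using timer:apply_after from the stdlib timer module; note the timer only lives in this node's memory, which is exactly why you'd want the job stored somewhere more durable):

    %% schedule the remote call for one week from now; the pending
    %% timer is lost if this node goes down before it fires
    WeekMs = 7 * 24 * 60 * 60 * 1000,
    {ok, _TRef} = timer:apply_after(WeekMs,
                                    gen_server, call,
                                    [{my_app, 'foo@127.0.0.1'},
                                     {do_some_work, JobArgs}])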



