In my previous post I covered some of the concepts behind how I convert videos to various formats. Yesterday I finished the implementation that allows multiple video conversions to run at the same time, so let me walk you through some of the more technical details, to the best of my understanding.
BackgrounDRb does all the hard work, and can handle the idea of a job pool. What happens is this: there is one worker that is started by default. Instead of using this worker to handle the video conversion, I use it as the queue manager. My conversion controller calls something like this to send a new conversion request to the queue:
MiddleMan.worker(:video_worker).enq_queue_convert(:args => {:conversion_id => @conversion.id}, :job_key => @conversion.id)
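For context, that call sits inside a normal controller action. Here's a rough sketch of mine; the action name and redirect are just illustrative:

#app/controllers/conversions_controller.rb (illustrative sketch)
def create
  @conversion = Conversion.create(params[:conversion])
  #Hand the request off to the queue manager worker
  MiddleMan.worker(:video_worker).enq_queue_convert(:args => {:conversion_id => @conversion.id}, :job_key => @conversion.id)
  redirect_to @conversion
end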
In my video worker, I have two methods: one for the queue_convert calls, and one that actually handles the conversion.
#Queue up the video for conversion
def queue_convert(args)
  conversion = Conversion.find(args[:conversion_id])
  conversion.update_attributes({:status => "queued"})
  thread_pool.defer(:convert, args)
end

#Run the conversion of the video
def convert(args = nil)
  logger.info("Calling convert method for conversion #{args[:conversion_id]}.")
  conversion = Conversion.find(args[:conversion_id])
  #do stuff
  #Mark this job as complete in the job queue
  persistent_job.finish!
end
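Both methods live in an ordinary MetaWorker subclass. A minimal skeleton, assuming the pool_size class method available in recent BackgrounDRb versions controls the thread pool cap (the value 3 is arbitrary):

#app/workers/video_worker.rb (skeleton)
class VideoWorker < BackgrounDRb::MetaWorker
  set_worker_name :video_worker
  pool_size 3 #assumed setting: caps the number of threads in this worker's pool

  #queue_convert and convert from above go here
end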
My understanding is that the thread_pool.defer method puts the job into a persistent queue, so that if everything crashes before the job starts it can still be recovered. If the maximum number of threads in the pool hasn't been reached, a new one is spawned to handle the request. For my uses, it would be nice if I could write a bit more code to choose when a new thread spawns: free memory and processor usage matter much more than a raw thread count when it comes to video conversion. Time permitting, I might dive into BackgrounDRb and see how easy it would be to change that around a bit. In the convert method, the only important line is at the end, persistent_job.finish!, which marks the job as complete. I suspect this only frees the worker to look for another job or shut down; during my tests, a job that crashed halfway through its run (let's say FFMPEG seg faulted in the middle of my convert code) was not automatically retried when BackgrounDRb was restarted.
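Since crashed jobs aren't retried for me, one workaround is to sweep for stuck records on startup (or from a rake task) and re-enqueue them. A rough sketch, assuming convert sets a "converting" status before the heavy lifting, which the code above doesn't yet do:

#Re-enqueue conversions that were queued or mid-convert when BackgrounDRb died
#(assumes a "converting" status is set at the top of convert)
Conversion.find(:all, :conditions => {:status => ["queued", "converting"]}).each do |conversion|
  MiddleMan.worker(:video_worker).enq_queue_convert(:args => {:conversion_id => conversion.id}, :job_key => conversion.id)
end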
Since converting videos can take a really long time, I added a Conversion model to track the status of a video as it's being processed by the video worker. I probably could have implemented something with BackgrounDRb's result cache, but it struck me as not the most persistent way to track details about a conversion.
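The model itself is nothing fancy. A sketch of the migration behind it, with the column names being my best guess at the interesting parts:

#Illustrative migration for the conversions table
create_table :conversions do |t|
  t.integer :video_id #the video being converted
  t.string  :status   #queued, converting, complete, failed
  t.string  :format   #target output format
  t.timestamps
end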
If you’re looking for the actual code that *works* for me, you can check it out on GitHub: http://github.com/bamnet/bonsai-video/tree/master