Thumbnailing Adjustments

The current thumbnail generation code runs in the same process as the web request. It’s much easier to extract a thumbnail image in the same code that creates the thumbnail object, and it ran fast enough for most of my smaller test files. Now that I’ve started working with larger and longer files, I’ve found the thumbnail generation process (handled via this command: ffmpeg -i file.avi -y -deinterlace -f image2 -ss 00:12:34 image.jpg) is too slow to be acceptable. I imagine that as video codecs use more sophisticated compression to maintain quality, extracting a single frame from the middle of a stream only gets harder.

To fix this I’m going to have to rewrite thumbnail generation so it’s handled like video conversion and runs as a background job. It unfortunately won’t present users with their thumbnails as quickly as I’d like (no one likes to see a page saying “In Progress…”), but it should fix the long load times and subsequent timeouts caused by thumbnail generation.
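
For the record, the rough shape I have in mind is below. This is only a sketch: the ThumbnailWorker class, the Thumbnail model, and its status/source_path/image_path attributes are assumptions about how I’ll wire things up, not code that exists yet. The ffmpeg command is the same one the web process runs today, just moved out of the request cycle.

#Hypothetical thumbnail worker -- a sketch, not working code
class ThumbnailWorker < BackgrounDRb::MetaWorker
  set_worker_name :thumbnail_worker

  def generate(args = nil)
    thumbnail = Thumbnail.find(args[:thumbnail_id])
    thumbnail.update_attributes(:status => "in progress")
    #Same extraction command as before, just running outside the web request
    success = system("ffmpeg -i #{thumbnail.source_path} -y -deinterlace -f image2 -ss 00:12:34 #{thumbnail.image_path}")
    thumbnail.update_attributes(:status => success ? "done" : "failed")
  end
end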

Conversion Job Pool

In my previous post I talked about some of the conceptual stuff I use to convert videos to various formats. Yesterday I finished up the implementation that allows multiple video conversions to run at the same time, so let me walk you through some of the more technical details, to the best of my understanding.

BackgrounDRb does all the hard work, and can handle the idea of a job pool. What happens is this: there is one worker that is started by default. Instead of using this worker to handle the video conversion, I use it as the queue manager. My conversion controller calls something like this to send a new conversion request to the queue:

MiddleMan.worker(:video_worker).enq_queue_convert(:args => {:conversion_id => @conversion.id}, :job_key => @conversion.id)
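
For context, the controller side is just creating the record and handing its id off to the worker; roughly like this (the action below is illustrative rather than the exact code):

#Illustrative controller action -- names are approximate
def create
  @conversion = Conversion.create(params[:conversion])
  MiddleMan.worker(:video_worker).enq_queue_convert(:args => {:conversion_id => @conversion.id}, :job_key => @conversion.id)
  redirect_to @conversion
end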

In my video worker I have two methods: one for the queue_convert calls, and one that actually handles the conversion.

#Queue up the video for conversion
def queue_convert(args)
  conversion = Conversion.find(args[:conversion_id])
  conversion.update_attributes({:status => "queued"})
  thread_pool.defer(:convert, args)
end

#Run the conversion of the video
def convert(args = nil)
  logger.info("Calling convert method for conversion #{args[:conversion_id]}.")
  conversion = Conversion.find(args[:conversion_id])
  #do stuff
  #Mark this job as complete in the job queue
  persistent_job.finish!
end

My understanding is that the thread_pool.defer method puts the job into a persistent queue, so that if everything crashes before the job starts it can still be recovered. If the maximum number of workers hasn’t been reached, a new one is spawned to handle the request. For my uses, it would be nice if I could write a bit more code to choose when a new worker spawns: free memory and processor usage matter much more than the total number of processes when it comes to video conversion. Time permitting, I might dive into BackgrounDRb and see how easy it would be to change that. In the convert method, the only important line is at the end, persistent_job.finish!, which marks the job as complete. I suspect this only frees the worker to look for another job or shut down; during my tests, a job that crashed halfway through its run (let’s say ffmpeg segfaulted in the middle of my convert code) was not automatically retried when BackgrounDRb was restarted.
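
If I do end up diving into BackgrounDRb, the change I’d want is roughly the sketch below: gate the defer call on actual machine load instead of the pool’s worker count. The load_ok? helper and its 2.0 threshold are invented for illustration (/proc/loadavg is Linux-only), and jobs held back this way would still need something to re-check them later.

#Sketch only: a load-based gate for handing jobs to the thread pool.
#load_ok? and the 2.0 threshold are made up for illustration.
def load_ok?
  #1-minute load average; /proc/loadavg is Linux-only
  File.read("/proc/loadavg").split.first.to_f < 2.0
end

#...and in queue_convert, the unconditional defer would become:
#  thread_pool.defer(:convert, args) if load_ok?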

Since converting videos can take a really long time, I have a conversion model to track the status of a video that’s being processed by the video worker. I think I could have implemented something with BackgrounDRb’s result cache, but it struck me as not the most persistent way to track details about a conversion.
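
For the curious, the conversion record doesn’t need much: a pointer to the video, the profile to run, and a status string. The migration below is only an approximation of that shape, not a copy of the real schema.

#Approximate shape of the conversions table -- column names are illustrative
class CreateConversions < ActiveRecord::Migration
  def self.up
    create_table :conversions do |t|
      t.integer :video_id    #the source video being converted
      t.integer :profile_id  #which conversion profile to run
      t.string  :status      #"queued", "converting", "done", etc.
      t.timestamps
    end
  end

  def self.down
    drop_table :conversions
  end
end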

If you’re looking for the actual code that *works* for me, you can check it out on GitHub: http://github.com/bamnet/bonsai-video/tree/master

Converting Video

I spent this week working on tools to convert videos to different formats. My main goal was to allow people to specify ffmpeg conversion settings that could be used to render something into a “web-friendly” format like FLV, H.264, or even Ogg. To do this, I have a profile model that stores a command (string) with infile and outfile dummy parameters. You select a video you want to convert, choose the conversion profile, and send it off into the sunset.
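
Concretely, a profile’s command looks something like ffmpeg -i infile -f flv outfile, and turning it into something runnable is just a substitution. A minimal sketch, assuming the placeholders are literally the words infile and outfile (the build_command helper is hypothetical):

#Sketch of expanding a profile's command template -- the helper name and
#placeholder handling are assumptions about the implementation
def build_command(profile, source_path, output_path)
  profile.command.gsub("infile", source_path).gsub("outfile", output_path)
end

#e.g. build_command(profile, "/tmp/in.avi", "/tmp/out.flv")
#  => "ffmpeg -i /tmp/in.avi -f flv /tmp/out.flv"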

Right now the “sunset” consists of a BackgrounDRb worker. I found it pretty challenging to debug while I was working on it: BackgrounDRb gives you a very limited trace of the error, and it never pinpointed which line in my video worker was making it unhappy. When things work well, the video conversion worker does a great job: videos convert, they’re created in the system and associated with their parent, no problems at all. The trick comes into play when a video conversion fails. Right now I don’t have any way to tell whether ffmpeg is having a good time or a bad time converting, which would be a really handy thing to know. Ideally, I’d be able to grab the last line or two of the ffmpeg output that shows the status, fps, etc. I might look into this when I have more time.
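
One approach I may try: ffmpeg writes its running status to stderr as carriage-return-separated “frame=… fps=… time=…” lines, so holding on to the most recent one during a run would give a cheap progress readout. A rough sketch follows; the popen3 usage is plain Ruby, but stashing the line in a progress column on the conversion is hypothetical, and the per-chunk database update would want throttling.

require 'open3'

#Sketch: run ffmpeg and keep the most recent status line it prints to stderr.
#conversion.progress is a hypothetical column, and the update should be
#throttled rather than fired on every chunk.
def convert_with_progress(command, conversion)
  Open3.popen3(command) do |stdin, stdout, stderr|
    stdin.close
    buffer = ""
    while chunk = stderr.read(512)
      buffer << chunk
      #ffmpeg separates its status updates with carriage returns
      last = buffer.split(/[\r\n]/).reject { |line| line.strip.empty? }.last
      conversion.update_attributes(:progress => last) if last =~ /^frame=/
    end
  end
end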

Additionally, I’m working on some code to support more than one worker running at the same time. Right now I spawn one video worker, which queues up all the requests to convert video… ideally I think I’d like to enable users to define how many conversions go on at the same time, so faster machines could handle more conversion processes.
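
The shape I’m picturing is a simple cap checked before a job is handed to the thread pool. A sketch, where MAX_CONVERSIONS stands in for whatever user-facing setting ends up holding the number and the “converting” status value is assumed; like the load check above, anything held back would still need to be re-checked later.

#Sketch: cap simultaneous conversions at a configurable number.
#MAX_CONVERSIONS would come from a settings table or config file; it is a
#constant here only for illustration.
MAX_CONVERSIONS = 2

def queue_convert(args)
  conversion = Conversion.find(args[:conversion_id])
  conversion.update_attributes({:status => "queued"})
  #Only start another conversion when fewer than the cap are already running
  running = Conversion.count(:conditions => {:status => "converting"})
  thread_pool.defer(:convert, args) if running < MAX_CONVERSIONS
end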