Our thoughts on YouTube’s game-changing thumbnail test
Improving content performance on YouTube is something the DriveTribe team constantly strives for. Recently, YouTube invited us to take part in its latest A/B Thumbnail Beta test. Following the test, we wanted to share our thoughts on why thumbnails are so important, and what we learnt from participating.
Why are thumbnails so important?
Video thumbnail selection is one of the most important pieces of the video package on YouTube. A thumbnail is usually the first thing a potential viewer sees when browsing, so it must provide enough information to entice the viewer to click through.
However, if the video doesn’t deliver what the thumbnail promised, it is a sure way to lose viewers quickly. Therefore, designing an appealing, authentic, high-quality thumbnail plays an integral part in any successful YouTube video, and channel.
What is A/B thumbnail testing?
Previously, trial and error would play a major part in determining the success of a thumbnail. There are key fundamentals which nearly all successful YouTube channels use, such as: an image of the main people in the video; an image which directly relates to the content; and a key word or phrase that either entices or sums up the content of the video.
However, once your YouTube thumbnails become consistent and your channel has a distinct, successful brand identity, how can you further optimise a thumbnail’s performance and squeeze out those marginal gains?
An A/B beta test helps with exactly that.
During the recent beta we took part in, we were able to select up to three different thumbnails to attach to a video. Once the video was uploaded, YouTube would put a different thumbnail in front of a range of people and provide feedback on which ones were most successful.
After the results came in, we could either select the winning thumbnail to be permanent or re-run a new test.
How long did it take to select the best thumbnail?
Results could arrive within a few hours, or take as long as two weeks in one case. Throughout our experiments we discovered that the speed of results was determined by how similar our thumbnails were – the more they differed, the faster the test would finish.
Another factor which had an impact on test speed was the amount of traffic a video received. Understandably, the more impressions a video received, the faster a test finished. As an established YouTube channel, this wasn’t so much an issue for us, but we could see how it could affect smaller channels.
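Both observations match basic A/B testing statistics: the smaller the gap between two variants' click-through rates, the more impressions you need before a winner can be called with confidence. The sketch below is our own back-of-envelope model using a standard two-proportion sample-size formula – YouTube hasn't published how its beta actually decides a test is finished, and the CTR figures are made-up examples.

```python
# Back-of-envelope estimate (not YouTube's actual method): how many
# impressions per thumbnail a two-proportion z-test needs to detect a
# given CTR difference at roughly 95% confidence and 80% power.
from math import ceil

def impressions_needed(ctr_a: float, ctr_b: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-variant sample size to separate ctr_a from ctr_b."""
    p_bar = (ctr_a + ctr_b) / 2          # pooled click-through rate
    diff = abs(ctr_a - ctr_b)            # the gap we want to detect
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / diff ** 2
    return ceil(n)

# Clearly different thumbnails (hypothetical 4% vs 6% CTR) resolve
# with a couple of thousand impressions each...
print(impressions_needed(0.04, 0.06))

# ...while near-identical ones (4.0% vs 4.2%) need vastly more traffic,
# which is why similar thumbnails made our tests drag on.
print(impressions_needed(0.040, 0.042))
```

This also explains why smaller channels would feel the delay most: with fewer daily impressions, even a modest required sample size takes far longer to accumulate.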
What were the results?
The measurements we received covered each thumbnail’s click-through rate (CTR) and watch time. CTR is the proportion of people who see the thumbnail (an impression) and then click through to the content.
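In code, the comparison is simple: divide clicks by impressions for each variant and pick the highest. The variant names and counts below are invented for illustration – they are not our real numbers or YouTube's reporting format.

```python
# Illustrative CTR comparison. All figures are made-up example data,
# not real DriveTribe results or YouTube's internal reporting.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: share of impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

variants = {
    "thumbnail_a": {"impressions": 10_000, "clicks": 520},  # 5.2% CTR
    "thumbnail_b": {"impressions": 10_000, "clicks": 610},  # 6.1% CTR
}

# The "winning" thumbnail is simply the one with the highest CTR.
winner = max(variants,
             key=lambda v: ctr(variants[v]["clicks"],
                               variants[v]["impressions"]))
print(winner)  # thumbnail_b
```

In practice the beta also weighed watch time, so the highest raw CTR alone isn’t the whole story – a clickbait thumbnail that drives clicks but loses viewers early would fare worse.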
We uploaded eight videos during our two-month beta trial, with almost every video performing above our average CTR for the previous two months. Overall, we saw average click-through rates improve by one per cent.
In terms of keeping eyeballs on content, our 90-day data review since participating in the beta revealed an immediate spike in views, as well as an average increase in total channel views. The winning thumbnails also demonstrated an ability to increase the watch time of their videos.
We constantly assess all our processes, from initial client on-boarding through to content output, seeking as many marginal gains as possible.
Marginal improvements across all areas are proven to lead to huge results.
This beta allowed us to squeeze out those marginal performance gains just a little bit more. It is the right step forward for YouTube, and we look forward to it being rolled out as a permanent feature in the future.