Can you Predict the Oscars? And why would you do so?
March 12, 2017 Laura K

Now that everybody has recovered from the Oscars Best Picture mix-up, I wanted to share some findings about Oscar predictions. I've always wondered how closely we can predict the winners: Can we predict the Oscars with data? How accurately? Based on which parameters? And, most of all, what could distributors gain by relying on such predictions?

Of course, experts and critics watch the nominated movies and get a feeling for how much Oscar material they are. But what about mathematical models? Beyond the entertainment and emotion of the ceremony, the event has a bigger role to play: the Oscars are fun, but they also represent a considerable business opportunity to boost the winners' box office. It's like a huge worldwide promotional campaign with a strong impact on your results.

How do you predict the Oscars?

First, the easiest way would be to run a poll and ask people which movie they would vote for in each category, like for a presidential election. Unfortunately, it can't be that simple for the Oscars, because not everybody can vote, and those who can are very discreet and cannot discuss their voting intentions. So you need other ways to gauge the probability that a given movie will win an award.

There are currently 3 models built to predict the Oscars: the first uses 2 major parameters driven by the EXTERNAL audience, people like you and me; the second uses INTERNAL facts more closely related to the Academy and the movie industry; and the last one integrates far more information, both INTERNAL and EXTERNAL, especially from the Internet:

1/ Zach Wissner-Gross and Randi Goldman's model:

This couple submitted a model that looks at 2 factors:

>> How much money did the film make?

>> How much do people like the film, based on its score on the review aggregator Rotten Tomatoes?

The advantage of this minimalist approach is that you can predict the Oscars as soon as the nominations are released, since you don't rely on earlier festivals' prizes and awards shows. It is also convenient that the data is easily available to conduct such a study.

For now, this model only predicts the Best Picture and Best Actor/Actress categories, where you can observe trends (a rough sketch of such a model follows the list below):

  • For Best Picture: if the film makes too much or too little at the box office, it almost loses its chance to win the award.
  • For Best Actor/Actress: it's the same, you need a balance; the movie shouldn't earn too much or too little. On top of that, a physical transformation greatly increases their chances.
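
To make this concrete, here is a minimal sketch of how such a two-factor model could be fitted. This is my own illustration, not the authors' actual code: the logistic regression, the squared box-office term used to capture the "not too much, not too little" sweet spot, and every number in it are assumptions.

```python
# Minimal sketch of a two-factor Oscar model (NOT the authors' actual code).
# All figures below are purely illustrative placeholders, not real data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical past nominees: [domestic box office in $M, Rotten Tomatoes score]
past = np.array([
    [150, 96],
    [45, 91],
    [400, 85],
    [12, 88],
], dtype=float)
won = np.array([1, 1, 0, 0])  # 1 = won Best Picture, 0 = lost (made-up labels)

def features(x):
    """Box office, box office squared (the 'sweet spot' term), critic score."""
    return np.column_stack([x[:, 0], x[:, 0] ** 2, x[:, 1]])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(features(past), won)

# Probability that a hypothetical new nominee ($80M gross, RT 93) wins.
print(model.predict_proba(features(np.array([[80.0, 93.0]])))[0, 1])
```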

2/ Brian Goegan’s model:

This model uses 3 parameters:

>> The previous awards given to the nominated film, like the Golden Globes, the BAFTAs, or awards from the different guilds (production designers, editors, etc.), because many of their members are also part of the Academy.

>> The other nominations the film receives. As a trend, a film nominated for Best Picture but not for Best Director, editing, or cinematography probably won't win.

There are some interesting insights for Best Actor/Actress as well: actors are more likely to win if their movie is nominated for Best Picture, while it's quite the opposite for actresses, who are better off coming from a more "out of the spotlight" movie to get the Academy's attention.

>> The critics and movie reviews, as they tend to match the Academy's current tastes.

These parameters help you refine what the Academy members' votes might be, since you cross-match facts that are supposed to represent movie industry executives' perceptions and preferences.
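
As a rough illustration of how these three signals could be combined into a single ranking, here is a toy sketch. The field names, weights, and figures below are my own placeholders, not Goegan's actual model.

```python
# Illustrative toy score combining Goegan-style signals; the weights and the
# example figures are assumptions, not the published model.
from dataclasses import dataclass

@dataclass
class Nominee:
    prior_awards: int        # Golden Globe, BAFTA and guild wins so far
    other_nominations: int   # director, editing, cinematography, etc.
    critic_score: float      # 0-100 aggregate review score

def win_score(n: Nominee) -> float:
    """Toy linear score: higher means a stronger Best Picture candidate."""
    return 2.0 * n.prior_awards + 1.5 * n.other_nominations + 0.05 * n.critic_score

nominees = {
    "Film A": Nominee(prior_awards=4, other_nominations=6, critic_score=93),
    "Film B": Nominee(prior_awards=1, other_nominations=2, critic_score=88),
}
ranked = sorted(nominees, key=lambda name: win_score(nominees[name]), reverse=True)
print(ranked)  # strongest candidate first
```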

Neither of these 2 models really includes the impact of the marketing and promotion that distributors deploy for their nominated movies. Of course, Academy voters are not supposed to be influenced by it, but the truth is, if those campaigns had no impact, distributors wouldn't spend any money on them.

Indeed, what about the impact of all the interviews in the media, the Facebook Live Q&As with the nominated crew, the backstage footage released online? In the digital space, these could easily be measured and included in the model.

That’s the approach of FiveThirtyEight.  

3/ The FiveThirtyEight model:

FiveThirtyEight is a website created by Nate Silver in 2008, specialised in data analysis and forecasting and best known for its election coverage. Its model combines all types of models and sources so you can get the most accurate probabilistic framework possible:

>> Mine Internet activity thanks to Burak Tekin's GOOGLE NEWS model and Paul Singman's TWITTER model.

>> Gather all Academy-related figures and facts. Basically, this part builds on the 2 models above, including box office figures, Rotten Tomatoes scores, and previous nominations.

>> Estimate what Academy voters like by simulating the vote and asking other people to rank the nominated films. This model comes from James England, who first used it for football: the idea is to group the films into teams and pit them against each other. You then ask people on a website (MovieLens) how many of the films they have seen; if they claim to have seen three or four of them, they are asked to vote.

>> Analyze what has been written about the movies in the press (see the sketch below). Allison Walker's method scans the words written about this year's nominees and compares them to the words historically used for past winners. Gary Angel's approach uses data to identify what Oscar voters care about, based on what is published in the Hollywood press such as Vanity Fair or the LA Times. For example, in 2013 many articles in the press mentioned "Iran", "rescues" or "heists", which indicated why the film Argo found a more receptive audience.
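
To give a feel for that press-analysis step, here is a very rough sketch of comparing this year's coverage with the vocabulary used around past winners. The tokenizer, the text snippets, and the overlap measure are all placeholders of mine, not the actual methods described above.

```python
# Rough sketch of the press-keyword idea; real methods use far larger corpora
# of Hollywood press coverage. All snippets below are made up.
import re
from collections import Counter

def keywords(text):
    """Lowercase word counts, ignoring very short words."""
    return Counter(w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3)

past_winner_coverage = "a tense rescue thriller about iran praised for its craft"
this_year = {
    "Film A": "a daring heist and rescue story set in iran",
    "Film B": "a quiet family drama about memory and loss",
}

winner_words = keywords(past_winner_coverage)
for film, blurb in this_year.items():
    overlap = sum((keywords(blurb) & winner_words).values())
    print(film, "overlap with past-winner vocabulary:", overlap)
```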

 

[Figure: Oscars Model]

 

Thanks to all these types of data, you can cross-match different probabilities to validate an outcome across your sources: for instance, "film A will win Best Picture". The more sources validate that outcome, the better its chance of being accurate. If sources 1, 2, 3 and 6 all put film A as the winner, it increases your confidence (meaning it's probably true) that it will win the award.

So the more sources you have in your model, the more robust and reliable your results.
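
Here is a rough sketch of that cross-matching step, under my own assumptions: each source simply names its predicted winner, and the agreement between sources is turned into a consensus share. The source names and picks are made up.

```python
# Toy consensus across prediction sources (illustrative, not any real model).
from collections import Counter

source_picks = {
    "box_office_model": "Film A",
    "awards_model": "Film A",
    "google_news": "Film A",
    "twitter": "Film B",
    "press_keywords": "Film A",
    "movielens_vote": "Film B",
}

counts = Counter(source_picks.values())
total = sum(counts.values())
for film, n in counts.most_common():
    print(f"{film}: backed by {n}/{total} sources ({n / total:.0%})")
```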

What are the outcomes of such statistical predictions?

What's interesting with all these sources of data is that you are essentially listing all the different channels of a campaign that can very directly impact the success of your movie.

>> The emotional factor: Do people like your film?

>> The press: Does the Hollywood industry talk about your film?

>> The scale factor: How many people care about your film? How much revenue do you make from it? Is it creating a buzz on the Internet? Did your film make a sensation?

These are channels you can actually optimize through your promotional campaign, such as:

  • PR and press junkets: have the film director and the crew talk about the movie to the press and the fans.
  • Digital: spread the word by having fans share their support on Twitter, Facebook, blogs, and the online press.
  • Target and scale both: so you can optimize your campaign budget spend.

So there seems to be one obvious outcome: measure how your nominated film ranks against the others, gauge which promotional channels you can optimize to increase its chances of winning the prize, and do so while your campaign is still running.

Thanks to these different channel criteria, you can monitor your campaign at a very precise level and apply the most relevant adjustments.

Indeed, why put so much effort into an Oscar campaign? You could say all this is only for a golden statue that will end up on someone's mantelpiece. Well, movie marketers are very aware of the impact an Academy Award can have on a movie: if your movie has the opportunity to be nominated and to win Best Picture, it could be synonymous with hitting the jackpot!

Winning Best Picture gives an incredible boost to your movie's gross and life cycle, as you can see with the top 3 from 2009:

For Moonlight, that meant a 39% rise in its total domestic box office, going from $15.9M to $22.2M. And La La Land also improved its box office thanks to the buzz from the ceremony snafu.

So Oscar prediction is a real thing when you manage to cross-match different types of data, both from inside the movie industry and from what's available online. Digital gives you access to more data and scales the capabilities of your framework: data from more websites, more types of content, or why not international insights?

Being able to refine and improve these probabilistic models represents an amazing opportunity for movie marketers to monitor their promotional campaigns and get the most out of a movie's nomination.
