What has been the most common question asked of me by filmmakers when I am out at group events?
Simple: “Do you use AI to do your work?” Yes, that’s the question. And maybe you wonder about that, too? I have a quick answer, contained in one word: “No.” I do not use AI to build financial models, to write audience analyses, or to track and report on how the industry is working today.
Why is that?
First: AI (more properly called an LLM, or Large Language Model) is not really an intelligence. And, before I go further, I do not want to pretend I am a database expert, a computer systems expert, or a newly minted AI expert. Nor do I want to be. (Those who are experts tell me it is closer to a prediction engine drawing on a pool of available data than an actual intelligence.) What I do want to be, and am, is an experienced motion picture and filmed entertainment analyst who focuses on how audiences search for, find, and consume films (and why), and on how films earn in the marketplace, based on real-world consumption by those consumers and on real-world deals (which are many and layered). I use this work to help filmmakers prepare their films for finance, to be ready for a successful result from production while dodging as many risks as possible, and, even more importantly, to be ready to do successful business with their films.

So, AI is not a brain, or anything like a brain. It is a massive pool of information programmed to appear to be an intelligence, when it is really a query system with very long tentacles, crafted to make you think you are in a dialogue with a sentient being.
Be aware, I am not talking about creating videos, which is not my purview here. I am talking about things like modeling (not “predicting”) the consumer activity a film achieves, the box office, identifying the target audience, sometimes the secondary and tertiary audiences, and the like. Since this is what I do, carefully, in my work, that is likely what the folks asking whether I use AI must be talking about.
Can an LLM be fed the box office of all films of all time and run high-end modeling on all that data? Potentially, yes. But where is your prospective film in that pool of titles? Like any search engine, you need to guide it closer to your answer, and yet every LLM I have seen warns you that there could be f*ck*ps in the result and that it may not be reliable. What metrics do you feed in? Is genre enough? Did you know that under Horror there are sub-genres that work well or poorly with differing audiences, and for differing reasons?

With a new pair of socks, the key market analytics are pretty simple, with only a few differentiating fillips or filigrees possible. Is this film another pair of socks with a new orange and green pattern? No. It is something much more complex. A prospective film, even a formulaic genre film, acts in a realm of human experience that socks cannot even approach. A film is often born of a unique mind, or a group of minds. It is a very, very complex creative activity pursuing a singular or shared vision. Socks can be marginally unique from each other, but socks cannot approach a vision. An experiential product can touch you internally (we often use “heart” to talk about this experience), can move you, and can make an impression on you that lasts a very long time. Can socks touch your heart? Likely not. But your grandmother, knowing you suffer from cold feet, knitting warm and particularly soft socks in the special favorite color you told her you loved when you were eight, those socks can touch your heart. Socks are a static product; your grandmother is experiential.
So, can an LLM duplicate these “heart” experiences for you? Only in the most insipid sense. Movies, though, touch the heart with experiences that can feel real and be retained in your real memory.
Then, can an LLM duplicate the complexity and leaping connections of the human brain? Like a Rube Goldberg machine, or The Wizard of Oz, it can approximate that, or even fake it for you. But can an LLM work hard on a project problem, then go to sleep and have the solution come to it in a dream, or upon awakening?

So, Artificial Intelligence, AI, an LLM, can be called an imitation of intelligence. They sell it as intelligence, when it is actually a tool to spread knowledge and certain capabilities more widely. But it cannot do that without ingesting massive amounts of knowledge to pull off this trick, this magic. Where do they acquire this knowledge?
I often tell clients that there is an iceberg under the water of the work I do, and that the part under the water is cogitation, or “thinking time.” My first task when taking on a project is to burrow into it so that I become like a weaker clone of the writer/producer/director, whatever combination I am working for, and learn from them and any materials what this project is, what the vision is. I feel I have accomplished my task when I can pitch the project back to my client and they find I have not missed any of the key nuances of the vision.
I have had clients use LLMs to read their scripts, which concerns me in multiple ways. First, why would you allow the LLM to train itself on your intellectual property? Second, the LLM appears to have stayed awake throughout the script and regurgitates it well, but it cannot make mental leaps outside the context of the script and outside the pool of data at its disposal. It does not invent; it cannot twist with the internally consistent gut check of a writer doing the dog work that is writing, or the work of the director or producer helping the writer heal a gap in the story, or run it through to its unique conclusion within the context of the mind that is creating it. The LLM that reads a script can become more stuffed, more programmed, more Rube Goldbergian, but it will always be “other,” outside looking in, like a “replicant” with implanted memories, maybe even with photographs to support some of those memories, but always artificial.
I doubt an LLM can cogitate extensively on whether Deckard is or is not a replicant himself if those ideas were not included in its data pool.
So, no, I do not use AI to do my own work. But in defense of some AI capabilities, it can put search engines on steroids for me, aggregating wider information pools and surfacing information that was heretofore harder to find. Yet the cost of presenting this as a panacea for a lack of actual intelligence, or even for laziness, is proving far too high in infrastructure and in energy consumption, which can swamp a town and, as in Memphis, TN (a town in my heart and family), quite likely poison a population.
In my years in the business, I have lived through quite a few bubbles and bursts in the outer economy and in the inner (filmed entertainment) economy. Is this another bubble, an AI bubble? I think it is acting like a bubble right now, as the greed of some sees a great land-grab opportunity. But it might well settle down to be a unique toolset that helps multiple levels of this community create their films more efficiently and more successfully, like the work Steve Jobs put into building the computer graphics group he bought from George Lucas into Pixar. I hope and believe it will not replace the incredibly hard creative work that is the real value at every level of this business.
A note about Pixar: it had a BIG staff constantly programming, drawing, animating, doing all the elements of digital animation. It was not a machine with a couple of buttons on it.
Two other thoughts:
I was there just before Kodak closed the company. In the two years before that day, I was producing and delivering intelligence on the penetration of digital capture, a matter of great concern to the VP I was working for. My prediction to her and her cohorts, in one word: “Tsunami.” Like later telecommunications executives, everyone around the VP pooh-poohed it and looked for doors out of the room. Then I called her one day to discuss further work, and it happened that I was the first person she talked to after the devastating all-company meeting. “In shock” is a mild way to describe what the closing of Kodak did to so many thousands. None of us wants that kind of paroxysm to sweep us up.
And, just recently, Jeff Clanagan, whom I did some work for at Codeblack and who now heads Hartbeat, posted a long and passionate LinkedIn letter encouraging everyone within hearing of his voice to get an AI education before they get swamped, maybe like the folks at Kodak were swamped by forces at large.
You may disagree mildly or even vehemently with me, and that is okay. Right now, I am just staying dedicated to the work that is this business, which never fails to challenge as it evolves.
Onward and Upward for all of us.
Jeffrey Hardy
