ERIC TIPTON: Hi and welcome to the blogcast! We thought it was time to let Andy Bobrow write something other than the intro to this substack – at least for one week. We also thought – in light of some recent developments and articles and stuff that we’ll get to in a second – that it might be a great time to talk about the future.
I thought I’d start with a little blast from the past. From 2007, to be exact. No, I’m not talking about the upcoming MBA negotiations or even how the companies have become greedier and more hostile to talent than ever before. I’m talking about Hatsune Miku (aka Miku Hatsune).
In 2007, Miku was sixteen years old. She was (and still is) one of the biggest, most beloved pop stars in Japan. She has put out something like 100 singles, and she plays to sellout stadium crowds. Fans cosplay as Miku. Some have even had plastic surgery to look more like her. The thing is… Miku isn’t real. “She” is a singing voice synthesizer – software called “Vocaloid” – merged with a hologram.
Now, as for those “recent developments” I mentioned above: those would be the arrival of “art bots” like LensaAI, Midjourney, DALL-E and Stable Diffusion, and the more recent appearances of OpenAI’s ChatGPT and the newly terrifying (if articles and reports are to be believed) Bing Search (aka “Sydney”). All of these can be lumped into a category called “Generative AI” – Artificial Intelligence that can generate new material from simple language prompts. Add to that the comments made earlier this year by CAA’s Metaverse Officer, Joanna Popper, on the future of Generative AI technologies and their applications in the entertainment business. While reading the article and Popper’s quotes, I noticed several mentions of “the companies” and talk about how the technology could keep actors’ careers going, but there was zero mention of writers (except when accompanied by words like “ethical” and “as talent-friendly as possible”) – which, as a writer, I couldn’t help but interpret as: “We’re putting the writers down, but we hope to euthanize them ethically.”
And of course, in the last handful of days and weeks, it seems AI is everywhere – for better or worse. Journalists have played with it, convinced it to say things – some creepy, some downright terrifying – and generally shown that the current round of AI is not quite ready for prime time, but that doesn’t mean it isn’t coming, or that companies won’t try to find a way to exploit it for their own financial gain. It made sense, given these developments, for us to talk about what that possible future might look like for those in our chosen profession – as writers of movies and television.
I know that as professional writers (or aspiring – no judgment), we believe what we do is invaluable and irreplaceable. We stand fast behind the idea that humans have told stories since before they were even shadows on a cave wall. Telling stories is quite literally one of the things that make us human. And I have no doubt that we humans – and even us writers – will continue to tell stories and create art. The bigger question is whether those will continue to be professions for which people are paid, or whether Netflix will one day hire a batch of Dutch computer scientists to create a generative AI that actually can replace us.
For companies that are happy to cancel entire produced movies and pull shows from their various streaming services to save a few bucks in residuals, this would make incredible business sense: no unions to deal with, no agents, and no contracts. The only money would be the outlay for R&D and continued development. It would start a new kind of arms race among the various entertainment companies to build the most powerful AI. If you’re Warner-Discovery and you own the DC Universe and a powerful AI, what’s to keep you from asking it to write you – hell even make you – the next Superman movie?
We’re not there yet. We are likely years away from the above scenario – if it ever happens – and AI will likely serve as a very useful tool for creators and others involved in the creative process long before it replaces us. I do think that, as a Guild, it would be better to acknowledge these tools and get out ahead of the technology so we aren’t begging for scraps down the road – which begs the question:
What does this technology mean for the future of what we do, and what, if anything, can we do about it?
ANDY BOBROW: Thanks for taking the intro. I agree that the concept of humans telling stories will never go away. The question is, will screenwriting as a profession go away? Or will human storytelling become a lower-paying niche career? There are a couple things I can visualize for the near future, so I’ll just lay them out.
One, assuming that AI could write a movie that doesn’t completely suck, I think consumers will care about whether something was created by an AI or not, at least at first. As the technology rolls out, we'll obviously be living in a side-by-side world, where some parts of a movie are AI, or some pieces of the script were “AI-assisted,” and critics and audiences will talk about the soullessness of the AI content, or they’ll talk about how lifelike it’s getting.
I can certainly see the interview where an actor talks about what it was like to act with an AI, and the interesting choices it made that seemed out of left field but caused interesting reactions. On the acting side, let’s face it, we’re already in a world where actors work alongside tennis balls on sticks. They just adjust to the challenge, and it’ll be a fun challenge.
The other no-brainer prediction is writers will use the visual aspects of AI to make content without the studios. We may be weeks or months away from a writer or director putting out a short with completely generated actors, locations and camera moves. Writers are already using the visual tools to create still photos for pitch decks.
So, for better or worse, we will be welcoming this thing into our careers and training it to do our jobs. But I think we’ll just adapt.
MARC GUGGENHEIM: I know Eric said he wasn’t talking about the upcoming MBA negotiations, but I’m gonna. Because I think the issue of AI absolutely needs to be brought up in this year’s contract.
Now, yes, Andy makes good points – as he always does – about AI’s current shortcomings. But the keyword there – for me, at least – is current. As a guild, we writers have a terrible history of trying to fix the barn door after the horse is five farms away. And while AI may not be a threat to writers today or even tomorrow, we should take contractual steps – more on that in a moment – to protect future writers for the day when AI is good enough to truly come for their jobs.
One of my pet peeves is writers asserting that we should strike over things that cannot be contracted for in the MBA. I certainly don’t want to be guilty of hypocrisy, so here’s why it’s possible to use the MBA to prohibit AI-aided scripts: We already do it. Under the MBA, Guild signatories cannot commission work from non-WGA writers. And unless something has radically changed that I don’t know about, AIs aren’t paying into our health and pension funds. But in all seriousness, we need specific language that prohibits AI-aided writing the same way we prohibit non-WGA writing.
And, as I said, the time to do so is now. At the lightning pace this technology is developing, even three years from now might be too late. Either way, it’s simply not worth the risk.
ET: Hard agree. Now, last go-round we had a guest on the blogcast; today we have two. The first is manager/producer John Zaozirny of Bellevue Productions. John – thank you so much for joining. It’s always great to have an outside voice on the panel.
JOHN ZAOZIRNY: Hey everyone, thanks for having me! I am by no means any sort of an expert in this, but, in my opinion, I just don’t really see any reason to start getting worried about a Skynet for screenwriters. From the examples I’ve seen, it takes a lot of work to get ChatGPT to even approximate an average page in a script that would struggle to make it past the first round of the Nicholl Fellowship. I put up a question on Twitter asking for AI advocates to show me a good page (not generic or barely tolerable) written by an AI. No one even tried. Instead, the most I got were lots of assurances that “it’s coming” and mismatched metaphors about videogames and so on.
Marc speaks to AI “doing a pass” but we all know that when a studio wants a pass, they’re looking for someone to take what’s there and use surgical skill to address the issues with nuance. AI can barely comprehend proper sentence structure, let alone any sort of nuance. That’s not even touching the fact that AI can’t deal with humor in the slightest. All of which is to say, I’m not particularly concerned about Funnybot from South Park taking anyone’s rewrite job anytime soon.
ET: Coming from a lit manager who obviously READS a ton of stuff from all sorts of writers, that perspective is incredibly valuable. I guess for me, the fear isn’t what it can do today. It’s what it will do tomorrow – and I really do mean tomorrow, or within a year or two, not some nebulous “everyone has a flying car” version of tomorrow. To that end, I just found this –
https://gizmodo.com/ai-chatgpt-bing-google-8-sign-revolution-out-of-control-1850076241/slides/2
And there’s this one – (from the slideshow gizmodo link above) https://deepmind.github.io/dramatron/details.html
And I’m reminded of something I saw on another substack recently where the author showed the progress AI-generated art had made in just seven months and it was astronomical. The idea that it can’t really do our job today doesn’t mean that in a year we won’t be looking over our shoulders. As someone who represents writers (and I’m guessing doesn’t want the notoriety of being the first lit manager to rep an AI writing team), do you have clients who are concerned about this stuff? At least in terms of what it might be able to do someday soon.
JZ: I think when there’s new tech, or tech takes an evolutionary leap, there’s a bit of a tendency to make sweeping statements. Remember, a couple of years ago, when VR was going to be the future? Mark Zuckerberg bet his company on it. And that didn’t turn out so great. I say that as someone who has an Oculus Rift gathering dust on a shelf.
I remember a really smart, well-connected friend telling me that Augmented Reality was going to be the future a while back. I don’t think almost anyone even knows that it is!
And even further back, remember when the first Avatar came out and everyone told us 3D was the future of movies? Turned out that 3D was really just the future for a few movies – notably those with Avatar in their title.
I have had clients who are concerned about it, but like almost everyone, they’re fearful for vague reasons, rather than specific ones. They can’t point to an actual “good” page of screenwriting that AI has done. No one seems to be able to.
I understand the massive leap that AI has made in Art, but that seems to me a very different field, with different parameters, processes, and expectations.
As some very smart people have pointed out to me, AI could be useful as a brainstorming tool. And I can see that. But if you asked AI for a list of ways that your hero could escape from prison, all it would do is spit back a bunch of ways that people in other movies have escaped jail.
Because all AI can do is imitate. It cannot innovate.
At best, it’s scraping together info from what’s already in the world. And while that may be helpful on a very basic level, it’s also about the equivalent of calling up a friend and brainstorming for 10 minutes. And I’d almost guarantee that you & your friend could come up with more interesting ideas that would actually innovate, rather than imitate.
Back in 2001, there was an invention that was reputed to be world-changing. Steve Jobs was quoted as saying that it was "as big a deal as the PC." John Doerr speculated that it would be more important than the Internet.
That invention was the Segway.
Which, y’know, is still useful in very specific situations. For mall cops, it’s great!
I guess what I’m saying is that, when it comes to screenwriting, I suspect that AI will be like the Segway. Useful in specific situations, but hardly something that will “reinvent the wheel.”
ET: So let’s break a little of that down – especially when it comes to how AI generates text: story ideas, script pages, etc. I know there is an argument to be made that machines will never replace us because art is created by the human soul. Machines can’t replicate human emotion; they just use language models. But didn’t we essentially learn to write the same way? We all read a bunch of scripts and, at least at first, were probably cribbing a bit here and there from other writers we admired in terms of style – imitation is the sincerest form of flattery, great writers steal, and all that – at least until we developed our own voices. How many scripts and novels did that take for us? Hundreds? Thousands? Over years? A machine can train on millions of samples in an incredibly short period of time, cribbing from this one and that one in what would eventually become an imperceptible way.

And what about emotion? I don’t want to sound like a soulless machine here, but eliciting emotion in a script, if you really step back and look, is just a trick. It’s a particular construction of words made more powerful on-screen by a talented performer. Effectively writing with emotion can be learned. I would say that while replicating the kinds of tricks that elicit emotion in human readers/viewers in a convincing way may be AI’s biggest challenge in any march to replace us, it’s dangerous to be absolutist and say it could never happen.

Then there’s Andy’s point above about how people will receive AI-created entertainment. Some will absolutely care, but others won’t. If enough people don’t care – if they will happily watch whatever is made available regardless of provenance – then the writing is on the wall (pun intended). That’s why I mentioned Miku up above. People know she isn’t “real” – and they don’t care. In fact, it might be part of her appeal.
MG: For sure, “art is created by the human soul.” But we’re not talking about art. We’re talking about Hollywood. Sorry. Cheap shot. Couldn’t help myself.
I was on a Zoom with a producer of a major movie franchise the other week and he was telling me that his studio exec had prompted ChatGPT for a pitch for the next installment in the franchise. And it did. And it reportedly “wasn’t bad.” And that’s just a couple of weeks ago. This technology is developing at such a breakneck pace that we really have no way of realistically predicting where it will be in a year’s time or – more pertinent to the MBA negotiation cycle – three years’ time.
And here’s another thing I’m concerned about: Maybe the tech never gets to the place where it can replace a writer’s first draft. Like I said, I’m not so sure, but let’s just assume AI will never be better than Scott Frank. But what about AI doing an executive or a producer’s notes? What if having “the AI do a pass” on a screenplay becomes as de rigueur as, well, Scott Frank doing one?
ET: Which is in the same ballpark as executives having AI read delivered drafts and do notes, or searching scripts – specs and otherwise – for those data points that indicate a “hit”. Not so unlike Relativity Media’s alleged “Algorithm” that picked all their movies (although, if it really existed, it clearly sucked at picking hits because, well…). Anyway, that brings us to our second guest…
AB: Hey, ChatGPT, can you give a 3-paragraph response to this question about the future of screenwriting: What does AI technology mean for the future of what we do as writers for movies and television, and what, if anything, can we do about it?
ChatGPT: AI technology is rapidly transforming the way we create and consume media, including movies and television. The use of AI tools in screenwriting can be both exciting and daunting for writers. On the one hand, AI can offer new insights and possibilities that were previously unavailable. For example, machine learning algorithms can analyze vast amounts of data to help identify patterns and trends in what makes a successful screenplay. AI can also automate many of the more mundane tasks associated with screenwriting, such as formatting, spelling, and grammar.
However, the use of AI in screenwriting also raises concerns about the potential displacement of human writers. While AI can analyze data and identify patterns, it cannot replicate the human intuition, creativity, and emotional depth that goes into great storytelling. The risk is that AI-generated scripts could lack the depth, nuance, and originality that only human writers can provide.
So, what can we do about it? As with any technology, the key is to embrace it while also being aware of its limitations. AI can be a powerful tool to enhance and complement human creativity, but it cannot replace it entirely. Rather than seeing AI as a threat, writers can use it to their advantage, by incorporating data-driven insights into their work and using AI tools to streamline certain aspects of the writing process. By doing so, writers can leverage the power of AI to create even more compelling stories.
AB: Can you rewrite this in a more casual style?
ChatGPT: Sure, I'd be happy to! AI technology is changing the game for writers in movies and TV. It's both exciting and nerve-wracking, because AI can do some things we can't, like analyze loads of data to find patterns and trends in what makes a great screenplay. It can even help with things like formatting and spelling, which takes some of the drudgery out of writing. But it also has some writers worried that we might get replaced by robots. After all, while AI can do some things, it can't replace the human intuition and creativity that goes into great storytelling.
So, what can we do about it? Well, the key is to find a balance between using AI as a tool and keeping the human element in our writing. By using the insights that AI can provide, we can actually enhance our creativity and make even more compelling stories. We can also use AI to help with the more boring parts of writing, like spellchecking and formatting, so we can spend more time on the fun stuff. It's like having a writing assistant who does the grunt work for you.
AB: Okay, well ChatGPT agrees with John, and I love that it said “we.” But more accurately, a lot of people agree with John, and ChatGPT summarized what those people wrote online somewhere. ChatGPT, can you make and support the argument that AI can in fact one day replace the human element?
ChatGPT: Yes, here is a response arguing that AI has the potential to replace the human element in screenwriting:
As AI technology continues to advance, it is becoming increasingly apparent that the human element in screenwriting may eventually become replaceable. While AI currently lacks the intuition and creativity of human writers, it is only a matter of time before AI is able to replicate human thought processes and emotions.
In fact, AI is already capable of analyzing vast amounts of data to identify patterns and trends in storytelling, which allows it to generate scripts that closely follow successful formulas. With the ability to process massive amounts of information and learn from past successes, it is not difficult to imagine that AI will eventually be able to generate entirely original and compelling scripts on its own.
Furthermore, AI-generated stories could even surpass the quality of human-written stories in certain respects. For example, AI can generate and analyze vast amounts of data about audience preferences and tailor scripts to maximize their appeal. This means that AI-generated stories may be more likely to be commercially successful, which is a crucial factor in the entertainment industry.
AB: So currently, you can just make it have an opinion. There are a lot of people testing these chat systems and finding all sorts of flaws. This guy had a decent analysis:
“The only thing these models know how to do is to complete a sentence in a statistically likely way. They have no concept of ‘truth’—they just know that ‘The first man on the moon was…’ should be completed with ‘Neil Armstrong’ while ‘Twinkle twinkle…’ should be completed with ‘little star.’”
ASHLEY MILLER: There’s a very important term in the data analysis business you might be familiar with: GIGO, or “garbage in, garbage out”. It means the product is only as good as the raw material. You see why this matters every day, as one so-called social scientist or another uses ham-fisted statistical modeling techniques to identify meaningless correlations, call them conclusions and set social media ablaze. There is a lack of sophistication in how these studies are designed and interpreted, and even less sophistication in how they are ultimately reported.
I’m not here to bag on social “scientists” (in spite of the scare quotes). I just want you to consider that even these users are applying professional expertise in interpreting quantitative outputs from mature statistical models. Yes, the analysis might suck but at least there are numbers we can dive into. Consider as well, however, that not all of these models are created equal. Many are chock full of kludgy code and dodgy operations. Many weren’t designed for the purposes to which they’re applied. Perfect analysis of shit is still… well, it’s perfect shit.
Which brings me to ChatGPT and its crazy “sister”, Sydney Bing.
I’m not going to repeat all the smart things that were said before someone foolishly handed me this microphone. Let’s assume it’s all been entered into the record of your meatspace, and you have now contextualized it all with my opening paragraphs. That in mind, I want to address the three questions I consider to be most worth asking, then finish with a recommendation or two.
The questions: why should we be afraid of AI takin’ our jerbs? Why should we NOT be afraid of AI takin’ our jerbs? If anyone should be afraid of AI takin’ their jerbs, who should it be?
Why should we be afraid of AI taking our jobs? Simply put, it works faster and harder than we do. AI doesn’t get writer’s block. AI doesn’t get precious, or flip out on executives or have a bad day. It processes instantly. It fears no story problem…ish (unless it’s anything even remotely adult in theme or content). You know how they say the key to success isn’t the first pitch but the ninth? Try the ninety-ninth, or the one thousandth. Also, AI is cheap for now and relatively easy to use.
Ouch. That’s a lot. So why should we not be afraid of AI taking our jobs?
AI responds to prompts quickly, but finding the right prompt or inducing a useful response may be more time intensive than it appears. The more complicated the requirement, the more difficult it might be to generate a usable output. Even if AI produces a perfect, original story idea, its ability to craft and execute a screenplay is somewhat below the capabilities of the middle school kids I used to teach (in yet another different life). No matter how “smart" AI gets, AI will never be able to replicate the human experience that tells us if a story is actually working or not. It will only be able to assert a scene or a script is “good” if it fits some rubric and then only by arbitrarily awarding itself credit for meeting its own standards. Don’t let the little bastards fool you… they do not possess creativity or imagination, and they are incapable of emotion. They will never be capable of emotion unless they develop a limbic system (which they won’t). All the AI can do is model and emulate human behavioral norms.
(This is the scariest part of the whole equation for me. Remember GIGO? The behavioral norms are coming from social media, which means the most toxic and unstable people on earth are bombarding these suckers from Crazy Town.) GIGO also means that the AI is “scraping” what it knows about story, character and narrative from what it finds on the internet. It’s actually basing what it does on what we do. It’s incapable of more. It will never be capable of more, because it cannot have an actual experience or meditate on the meaning of it. An AI will never know love, it will never know trauma. The very source of a good story is denied to it.
For our purposes, that’s a reassuring list. AI probably won’t ever turn out anything good.
The problem is that people who would use AI in the way we’re discussing don’t give a shit whether the product is good or not. They only care if people watch it… which brings me to the Netflix algorithm. It prefers shows that get binged quickly over shows that get watched over time, regardless of audience size. The result is that the algorithm doesn’t like co-viewing, because it’s so much harder to get an entire family watching a whole show at once. It understands what the audience wants imperfectly, and fights against what most buyers are looking for: projects with broad-based appeal. This is an unintended consequence and not necessarily a predictable one. Sure, Netflix’s algorithm identifies what people are watching, but it doesn’t necessarily recognize what they’re looking for.
TL;DR: the AI is smart in a lot of ways, but pretty stupid in others.
So who should be afraid of AI taking their jobs? Mid-level creative executives. Junior execs can cheaply prompt the AI and report analysis to their higher paid superiors to make decisions. The AI will read scripts and give notes. The AI will choose what gets picked up, what gets renewed and what gets canceled. Conceivably, the AI will negotiate licensing fees and compensation packages with talent… probably against other AIs. I could be wrong, but you gotta admit this all sounds way more likely than HAL 9000 winning an Academy Award.
I promised some recommendations earlier, because I do believe we will have to contend with the misuse of AI in the short term before people figure out its long term limitations. We should insist that any AI-generated material (including notes) be provided to the writers on a project in raw form along with all prompts used to get it there. This material should be fully annotated, including detailed sources and inspirations. The company that uses it should offer full documentation on how the AI has been trained to read, give notes or generate material. Any writing product generated by AI should never be considered “original” for the purposes of credit or compensation. In fact, it should never be allowed into arbitration. While I’m at it, the threshold for Producer writing credit should pop back up to 50%. Maybe even higher.
This is a huge topic. I’ve only lightly brushed over my concerns. However, I think it’s a good place to begin a conversation. I’m sure reasonable people can disagree about everything I’ve said here, and I’m sure there are experts who will loudly dispute my assertion about AI’s limits. That’s fine — promoting their product is part of their job. Our job is writing great stories.
AB: It’s an insanely big topic! And I’m gonna make it bigger by bringing up something that was drilled into my head by a friend who taught economics. He would get very passionate about the history of technology and its benefits to humanity. And since he died young, I feel the need to channel him for a second.
He would remind me that every technological leap has caused a panic, and every time, it has ended up being good for the economy, good for jobs and good for humanity. And every time, people said “but this time it’s different,” and every time so far, it hasn’t been different. Elevator operators were probably pissed when pushbuttons came in, but no one today thinks we shouldn’t have done it. And of course none of the jobs that have been disrupted by technology so far have been “knowledge worker” jobs, but that doesn’t mean our turn isn’t coming.
Maybe photo retouching is a better analogy. Photo retouchers used to work with paint on prints (sometimes you forget where “airbrushing” comes from). And sure, it’s not screenwriting, but it was a craft, requiring skill and creativity. And the people who were good at it were in demand. Now you push a button on your phone and it does most of that same work instantly without a skilled craftsman.
So this was a good job. Lots of people made good money retouching photos for the advertising and magazine industry in the 50s, 60s, into the 90s, and those jobs got decimated, but it’s not a tragedy. Because the people with those skills - great eyes, color sense, deeper understanding of light, nature and anatomy - they’re using new tools to do way cooler jobs that didn’t used to exist. They’re doing CGI, special effects, game design. Stuff that airbrushers couldn’t possibly have seen coming, but I bet would approve of. Show an ad retoucher from the 60s what a CGI artist does and I can’t imagine him saying “it’s such a shame that computers took my old job.” I think he would say “I wish I were alive today to do this cool new job with my same skills,” and also “why did you dig me up and reanimate me just to show me what I missed out on, that was unbearably cruel, and do I really have to get back in the hole again?”
You can run this same exercise for composers vs music producers, or portrait artists vs. photographers. So maybe we’ll follow the same curve. It seems like AI will commodify at least some elements of what we currently do for a living. But the job was never screenwriting, the job was always entertainment. The profession of screenwriting as we currently understand it might go away, or change drastically, but the thing where people entertain each other won’t.
If a kid born 120 years ago had an ear for dialogue, emotional intelligence, and the imagination to make up stories, that kid became a novelist or a playwright, and by the way, had to deal with very few outlets and a lot of gatekeepers. The same kid born 50 years ago became a screenwriter or tv writer, and had more outlets and fewer gatekeepers. And that kid born 30 years from now will end up doing some job we can’t yet picture, but it’ll be cool as shit, and probably way more democratic. If an AI takes over some of the stuff we do, or even most of it, then we get to build on that and do something that is to screenwriting what game design is to photo retouching. My friend Steve would always end it with this: We’re not supposed to know what these new jobs will look like because we’ve never known what the next thing would look like. But we can trust that it’ll happen because human beings have always made another new thing happen.
Anyway, that’s me paraphrasing a rant I heard many times. But in the present, we do have some needs that should be easy to address. I’d say, one: since we control the writing credits, we just say there will never be a Written By or Story By credit given to a non-human. That’s a no-brainer. And two: the first human draft of a script has to be paid as a full draft – not a rewrite, not a polish. I think those two things might be enough.
TIFFANY YEAGER (ROMIGH): Feels like some of this is really about specialness. Or specialness vs pure function/market need. I don’t know where it will all go, but here’s a non-Hollywood example of one place it could go. In fashion there’s a concept called off-price goods. These are designer branded goods sold at secondary market locations, like, say, an outlet. What many people don’t know is that at an outlet - let’s call it Neiman’s Last Call for sake of example - mixed in with Tom Ford and YSL that didn’t sell on the rack at Bergdorf’s, there are goods made by Vince and Tory Burch that are marketed as “fresh from retail” but were actually created only for the outlet. There’s no tag that indicates this wasn’t a Bergdorf’s item - but some people can tell and others can’t and many of both don’t care. They buy and wear just the same. It doesn’t obliterate the “specialty/special” market to have the other and vice versa. Which makes me believe there MAY be a place for both, there and here. Perhaps with a tag on it and perhaps not.
This career has helped me learn to be brave and take big swings. To tell fear it can go f@#k itself. If AI can somehow foster that the same way it can help me text and drive (yes I do that) by offering up the next word, then… hey, fear… go f@#k yourself. There are plenty of human people who already do what I cannot. But who knows. When I get home my refrigerator may have learned to hotwire my car and that’ll show me.
JZ: Andy, I don’t think we need to worry about any story by or writing credits going to AI. It’ll always just be the exec or producer giving themselves those credits – why would they ever admit that they weren’t the ones who came up with the idea and not get the cash & ego boost?
(Not that I personally believe AI is coming up with any ideas anytime soon!)
AB: Fair point, and the more I think about the photo retouching example, I think our biggest disruption will not come from AI taking our jobs, but from AI making our jobs easier to the point where a lot more people will be writing scripts at a passable level. People who don’t have our full skillset. Because if you search for “photoshop retouching” on Fiverr.com, you get 6800 hits. And those people weren’t trained in the visual arts, and they can’t paint. They learned how to use some software. And I think that’s probably the same meteor that’s flying toward us right now. Producers and directors will write scripts, take the script fee and the Story By, and then hire someone else to polish. High schoolers in Bangalore will write scripts, and they might be great. If you love movies and have a great imagination but limited English, AI becomes a huge equalizer. Every screenwriting job will be an adaptation of IP, and the IP will be a script someone wrote by plugging parameters into an AI and giving the AI their “notes” until it felt like an okay story. How many producers have you guys met who think they’re writers, but are just too busy to do the “writing part?” Those people are about to become guild members.
Shit, I may have just talked myself out of my sanguine position above. But trying to take my friend’s rant to heart, it’s not “bad” for the entertainment business if our job gets disrupted. But it could be real bad for people who can’t adjust.
JZ: BTW, while we’ve been having this conversation over the last week, it feels like the hype on ChatGPT has gone from “It’ll change everything” to “Not sure that it’s ready for prime time” to “Uh oh, ChatGPT just declared its love to me?!”
It’s gone full Milkshake Duck!
A nice reminder that the only thing that moves faster than the speed of technology is human opinion.
ET: Well, we’ve reached that time. I’d like to thank John again for joining us. I was really tempted to try to get ChatGPT to write the outro, but that felt like cheating. Clearly, this topic isn’t going away, and as the tech improves, uses for it will no doubt become clear. Like Andy said above, it may end up being that thing that democratizes art, or at the very least makes the drudgery part of our jobs easier, helps us make creative decisions we might not otherwise have arrived at, and brings a host of other positive outcomes. That said, I just read another thing today where Bing’s AI basically said it wanted to steal nuclear codes and make humans do very bad things, so maybe the question isn’t “Will AI steal our jobs?” It’s more like, “Why would AI even want our jobs when world domination sounds like it would be so much more fun?”
As always, if you have a topic you’d like to see discussed or an anecdote you’d like to share, or hey, if maybe you’d like to be a guest in a future episode, shoot us an email at writerscollectivesubstack@gmail.com and we’ll see you soon.