Reading Media Narratives

Disclaimer: I’m ignorant about a lot of things. Here are the things I’ll admit to: I dropped political science in college, not because I didn’t find it interesting but because I never showed up to the Friday discussions of International Politics. (This would be why I also failed The Philosophy of Love and Sex. Oops.) I dropped the journalism major because I failed Economics 101. (A writer who’s irresponsible with money? What kind of monkeyshine is this?) I’ve also never made a quiche and don’t want to know how.

But I’m trying to understand how narrative works. We all know the basic structure, right? You have a beginning, a middle, and an end. You know what recent political slogan also shares those qualities? “Make America Great Again.” It presupposes that America was once great, that it’s currently not, and that it will be great once more because of us. Simple. Unifying. Four words, even. “I’m with her” doesn’t track as well; it’s inherently divisive, because if you’re not with her, you’re against her, a message cemented by the “deplorables” gaffe. Hillary Clinton’s response to MAGA was “America is already great,” which is probably better stated as “America’s better than it’s ever been, statistically,” but the former doesn’t contain a story, just an ending– which apparently translated to half the country as no change.

Political and media narratives generally don’t share this three-act structure– they are always written in the middle of things, without time to contextualize history or put a neatly wrapped bow on top of it. That happens after the fact, when history is canonized. These stories are written now.

It’s interesting to see this from a fiction writer’s perspective. Because we know, or are struggling to realize, that every story has a different set of triplets embedded within its narrative womb. Every story has a Hero, a Villain, and (oftentimes forgotten) a Victim.

That might be why the most enduring religious (and political, in its own way) narrative of western culture is that of Jesus Christ. Not only is there a Beginning (Bethlehem, three kings, shiny star, manger), a Middle (proselytizing, gathering disciples, miracles, crucifixion) and an End (resurrection, Heaven, legacy of Christianity), but the HVV trinity is also soundly in place: there’s a Hero (Jesus), a Villain (original sin, or Satan, or Rome), and a Victim (the poor, the sick, the lame, the oppressed). This parabola and its narrative conflict have been carefully crafted over centuries of canonization.

Ok. The most maligned and divisive phrase you’re going to hear for the next four years is “That’s how Trump got elected.” Without adding to that garbage fire of vitriol, I’m going to try to extrapolate Trump’s campaign message using the HVV dynamic, while also noting that, in political narratives, no one will ever claim to be the villain, and claiming to be the victim is viewed as politically weak.

Trump’s campaign universe had all three characters in a neat package:

The Hero (Himself, tremendously), the Villain (The corrupt, backstabbing government insiders), and the Victim (The working class people who feel their diminishing industries have been forgotten).

Versus Clinton’s:

The Hero(es) (Clinton, women everywhere), the Villain (Trump), and The Victim (…)

That last box is left a little blank, although there were many possibilities to fill it– Trump bragged about sexually assaulting women, claimed Mexican immigrants were rapists, that Muslims were dangerous, that stop and frisk policies aren’t biased against POC, that prisoners of war were losers, you name a demographic, he– in no uncertain terms– victimized them.

Which ended up as footnotes in the debates, if brought up at all. We saw a game of intense political chess play out. Politically, Clinton couldn’t shift women from the Hero slot over to the Victim role (whereas Trump, somehow, did, by bringing out the women who claimed Bill Clinton had sexually harassed them). Her immigration stance was relatively soft and seen as hypocritical in the shadow of Obama’s mass deportations, while any discussion about Muslims was either deferred to the Middle East as America’s Eyes and Ears, or avoided in an effort to escape the goddamn Benghazi hearings. When BLM was brought up, specifically when police brutality in black communities was addressed, Clinton went for the nuanced approach that we’re all a bit racist (statistically true) as opposed to Trump’s proclamation of Law and Order– a more aggressive approach probably would’ve backed her into a corner, given her “superpredator” comments.

Sidenote: Clinton’s verbiage is interesting to me because it’s similar to how I instinctively write certain scenes: Exposition, Dialogue, Exposition, EXTREME LANGUAGE CONTRARY TO THE PREVIOUS EXPOSITION TO INDICATE A SHIFT IN VALUE, Expository endcap. It’s clear that Hillary Clinton is a reader. Trump’s language is interesting to me because it’s entirely made of extreme language, in short, obscene outbursts. Kind of like LA Confidential.

Where was I? Right– Clinton’s elusive Victim.

She let the Villain speak for himself, which, to be fair, seemed like a reasonable thing to do. To her credit, Clinton appears to be a very sensible person and believed that voters would see through Trump’s narrative (and over 3 million more people did, but we’re not going into that right now). But, in retrospect, by leaving Trump to speak for himself, she handed him a broader platform; he took an even firmer grasp of his narrative and doubled down.

Let’s get away from the election. It’s over. It was disheartening, divisive and an ugly cartoon. And it’s over.

So let’s move on to how the media, no longer encumbered by the election, is now encumbered by DJ Trump’s Presidency.

Again, these narratives exist in the middle, always, and also contain the three character structure of Hero, Villain, and Victim.

On the left, this time, the Victims take center stage because there are so many people legitimately affected by the rapid-fire executive actions of the last two weeks: women seeking healthcare at NGOs outside of the US, Muslims from seven specific non-terroristic countries, green card holders, members of the LGBTQ community, Native communities that don’t want their water poisoned, Californians who subsist on nothing but avocados, peaceful protestors, federally funded science programs, lower-class individuals who can’t afford healthcare, and a hell of a lot more that I can’t remember because of the executive order blitzkrieg (the violent flurry of which might be a political strategy in and of itself– like a missile released with chaff to distract enemy fire).

The villains are obvious: Trump himself, Steve Bannon, Jeff Sessions, Betsy DeVos, Sean Spicer– it goes on. The Heroes come and go. Sometimes it’s Bernie, sometimes it’s Elizabeth Warren, but no solid figure has yet emerged.

In conservative media circles, it takes a little detective work to figure out the moving parts. The Hero is still Trump, because he’s following through with his campaign promises. The Villain role has shifted directly to Muslims, immigrants, the companies and states that oppose the Muslim ban, and leftist protestors. The Victims, this time, are harassed police and business owners.

That’s if the Victim is pointed out at all. Using the Victim role while in a seat of power is generally unwise. But there’s usually an implicit Victim and it took me forever to figure it out because it’s also a misdirection. Check out this Tweet:

[Screenshot of a tweet, 30 January 2017]

In a discussion that didn’t include Veterans at all, this tweet focuses its empathy on that demographic, because without a Victim the story isn’t complete. Sometimes you have to force it. Like when Kellyanne Conway invents a massacre to justify the travel ban. Or #Pizzagate. Or like this:

[Screenshot of a tweet]

Dick Spencer is carefully assuming the role in an attempt to make his white supremacist movement appear sympathetic and oppressed– going so far as to have Alt-Righters (otherwise known as Nazis) goad liberals into punching them at protests. They want that video to go viral because it confirms their notion that liberals are hypocritically violent and ironically intolerant. In other words, it villainizes liberals.

I figure it’s important to practice deconstructing media narratives now, not only because there’s a good chance the White House press corps will be primarily Breitbart affiliates within a couple of months, but also because if you want to have a perspective-changing dialogue, it’s key to identify which characters are in the other side’s narrative. If you can’t understand their ideology, you can at least understand their story.

So when informing yourself on current events, regardless of your political views, ask yourself the following questions:

Who’s the Hero, Villain, and (most importantly) Victim? 

Why are they portrayed this way?

Where does this story fall into the broader narrative being told?

Good luck out there.

Genre vs Fiction: FIGHT!

There’s an interesting divide in the academic literary world based on the question of “what constitutes Literary Fiction?”

This rift has spread to the publishing world. The Literary Fiction camp holds the belief that Genre Fiction writers are cookie-cutter sellouts, pumping out as much trash as possible to earn a quick buck, whereas the Genre Fiction camp views the Literary writers as idealistic snobs, writing from an ivory tower and waxing poetic in ruffled shirts.

With some of the stubborn and pompous attitudes of literary authors and all of the garbage self-published on Amazon, it’s hard not to agree with both stereotypes. But I think if you want to get in the habit of writing successfully, you need to understand and aspire to both schools of thought.

Speaking of schools, here’s a story from the last class I ever took at university: a 400-level Renaissance fiction class. Throughout all of the reading, we were asked a simple question: is this Literature? No one, not even the teacher, had a solid definition of what that meant. The vague answer is something like “a written work that has literary merit,” which loops infuriatingly into itself.

The term began gaining popularity as its own thing around the time travelogues came into vogue– somewhere in the early 1500s– and it’s easy to understand why, as a written, true account of a journey naturally strikes on the “beginning, middle, and end” narrative structure. These were supposedly non-fictional accounts, but there’s no doubt that details were embellished. The trend of intentionally fictionalizing these travelogues is traditionally credited to Sir Thomas More with his work Utopia. We also read a bunch of martyrologies, another supposedly non-fictional account that has some, shall we say, mystical qualities to it (in addition to being objectively metal). With every chunk of reading we were asked if this was literature.

More questions followed. Does literature have to be fiction? (A: “Not… really?”) Does it have to be interesting? (A: “Apparently not, because travelogues are boring as hell.”) Does it have to share significant insight into humanity? (A: “Uhm, hmmm.”)

One answer was certain: that everything we read was written to the guidelines specific to a particular genre.

Another question: Was this considered Literature at the time? Nearly everything we hold in literary prestige garnered its accolades long after the author died. Shakespeare’s works didn’t get the literary treatment until the 20th century. Mary Shelley’s Frankenstein is a horror novel. The Great Gatsby was considered a failure until after World War II. Is it literature? (A: “Let’s sleep on it and figure it out next century.”)

Now, as far as it relates to the publishing world, a distinction between literary and genre fiction can be made. As far as I can tell, the difference is this:

Literary Fiction focuses on introspective character studies that attempt to reflect a philosophical truth of the modern age. The character dictates the plot.

Genre Fiction focuses on universally recognizable characters driven to make choices by external actions. The plot dictates the character.

Modern fiction necessitates an overlap– Don DeLillo’s White Noise, for example, ends a meticulous and surreal study of a modern family with elements borrowed from a thriller. It’s that overlap you should aspire to. On one hand, learning and understanding the conventions of basic storytelling is important, because those elements don’t really change over time. Our brains are wired to understand stories and, ideally, you want the reader to understand and enjoy the act of actually reading your book. On the other hand, you should give a shit and try to make your work as affecting and relevant to the world around you as you possibly can.

Because at the end of the day, literature is like pornography. No one really knows what it is, but we know it when we see it.

Out of Frame

I got into an argument with my favorite bartender recently about the genius of Baz Luhrmann. His argument was Moulin Rouge. My counterpoint was The Great Gatsby.

Ahem.

Baz Luhrmann’s The Great Gatsby is a heartwarming tale of how a writer-director can take what is arguably the most American novel of all time and transform it into a staggering monument of cinematic piss. At 2 hours and 23 minutes, the film is a bloated psychedelic music video bookended by the frame narrative of Nick Carraway writing the book while being treated for alcoholism in a sanatorium.

While certain critics will defend this retcon as an innovative insight into F. Scott Fitzgerald’s life, which indeed spiraled into alcoholic chaos, it’s important to note that Fitzgerald wrote Gatsby in Europe, relatively comfortably, and that it was, in fact, Zelda Sayre who actually wrote a book while in psychiatric care.

You could say the use of the frame narrative device is similar to that of the cinematic adaptation of Naked Lunch, but that argument does not hold an ounce of water (or… gin?)–the film Naked Lunch is a schizophrenic journey of pornographic, junkie appetites that desperately needed narrative grounding, whereas The Great Gatsby is already a complete narrative, rendering any additional storytelling device unnecessary.

It’s kind of frustrating when you read Luhrmann’s thoughts on his own direction:

“What scenes are absolutely fundamental to the story? What scenes must be in our film? And what scenes can we do without, even if we love them?”

Luhrmann isn’t really known for discipline in his movies. And that’s fine. He’s about spectacle and I can respect that. But when you pair the above quote with the one below, from the same interview, my mind explodes:

“in the novel, Fitzgerald very deftly alludes to the fact that Nick is writing a book about Jay Gatsby in the book […] – “Reading over what I have written so far…” So Craig and I were looking for a way that we could show, rather than just have disembodied voiceover throughout the whole film, show Nick actually dealing with the writing[…]”

That’s a wide reach to justify framing the narrative this way. If I had to guess, Baz either wrote (or directed) his way into a corner, and the focus on Nick Carraway was his solution. I respect that writing involves a lot of creative solutions for the problems you give yourself, and a frame narrative might actually be helpful. I’ll even say that the subdued shots of a novel being organized were some of my favorite imagery in the film, because I’m a gigantic dweeb.

But I’ll take this explanation to task, as the Nick storyline is the very definition of visual “telling” and not showing. It’s making literal what was initially implied. Many book adaptations mention that the narrator wrote the book you hold in your hands– without having to wedge in apocryphal material to justify it.

And if you want an example of perfectly adapting a narrator to the screen, look no further than Netflix’s A Series of Unfortunate Events, which proves that it’s doable without coming off as heavy-handed.

But maybe I’m bitter: when I watched Gatsby in 2013, I left the theater angry because the novel I was working on at the time began with a frame narrative set in a psychiatric hospital. When I saw the trope play out on screen, it came off as cheap and melodramatic. I changed it as soon as I got home.

Why did it come off as cheap? How did the device actually change the story?

In Gatsby, the frame device puts a lot of weight on the story itself– that’s what a frame narrative does, looking back on a time with a different perspective– and shifts the interior story away from a ruminative reflection on the foibles of greed, the emptiness of shallow relationships, the tedious culture of high society, and the meaninglessness of achieving the American dream through ill-gotten gains, toward a brooding, traumatized perspective that undercuts the significance of anything else.


But in the end, The Great Gatsby (2013) got 48% on Rotten Tomatoes, which is the score a high school English teacher would give a book report presentation if it were just Baz Luhrmann nodding along silently as a Jay-Z album played through.

Then again, Robert Redford’s 1974 Gatsby vehicle got 39%. The 1949 one has a score of 38%.

Maybe we should just leave Gatsby alone.


Digging Into Horror – A study in HP Lovecraft

I have a few highlighted passages in Starry Speculative Corpse: Horror of Philosophy Vol. 2 by Eugene Thacker with the annotation “aaaaaaaaah!” written next to them. Here is the first:

…something exists, even though that something may not be known by us (and is therefore “nothing” for us human beings)… (p. 41)

Shortly thereafter, I have this highlighted:

Darkness is the limit of the human to comprehend that which lies beyond the human… knowing of this unknowing… the conciliatory ability to comprehend the incomprehensibility of what remains outside… (p. 41)

Next to which I have annotated, “we only know so little about how we only know so little.” I then highlighted the following:

…there is nothing outside, and that this nothing-outside is absolutely inaccessible. This leads not to a conciliatory knowing of unknowing, which is really a knowing of something that cannot be known. Instead, it is a negative knowing of nothing to know. There is nothing, and it cannot be known. (p. 42)

I have annotated, “we don’t even know what we don’t know,” followed by “aaaaaaah!” again.

Cosmic horror is more or less predicated on these principles– that we are insignificant and blind to the order of the universe, which allows us to dream up monsters of the dark that are, by our nature, incomprehensible. The general conclusion of most stories that fall into this genre is that a character, having been exposed to the unknowable, will inevitably go insane.

All horror follows this notion on some level, whether intentionally or not– good horror allows our own minds to scare us instead of the monster on screen. Jaws famously buried its shots of the shark under several iterations of editing, John Carpenter’s The Thing never shows the alien’s true form (only the perversion of the host’s body it’s replicating), Jason Voorhees and Michael Myers hide behind dehumanizing masks, and Sam Raimi’s Evil Dead “zoomcam” follows from the perspective of the damned, but we only see the evil manifested in the body of the possessed victim. The monster loses its potency once you see it in the light– once it’s realized, it can be killed.

So what sets the works of HP Lovecraft apart from the rest is how he’s able, in prose, to bury the horror so deep that it gradually creeps up on the reader. At first it seems like a magic trick. Until you see the cards.

The cumulative experience of reading HP Lovecraft is unlike anything else I’ve read: for me, it was a joyful one. I tried to pay attention to how Lovecraft crafts that lovely feeling.

First, he tells you the ending up front, usually in the first sentence of the story. From Dagon: “I am writing this under appreciable mental strain, since by tonight I shall be no more.” You know from the outset that the narrator is insane and will be dead soon, likely by suicide. It reminds me of the theory that spoilers only enhance the enjoyment of something, because you know what you’re looking forward to. It’s a clever device that answers a question and asks another– you know the ending, now don’t you want to find out how it got there? Eh? It also plants a seed of anxiety in the reader and puts them on edge– they know something’s going to happen, just not when.

You’re going to need that little push to get through a lot of his work, too, because HP apparently loved writing in arcane language. Most of his work came out in the ’20s and ’30s, so it’s pretty dated by modern standards– and by the standards of the time. It’s dry and academic, and I’m 90% certain that it’s written stiff on purpose. I kind of love this because it’s so antithetical to Lovecraft’s literary contemporaries– whereas Hemingway and EB White preached “brief and concise” to get the idea across effectively, Lovecraft prefers “vague and elevated” language to confuse the reader. The geographic description of a simple landscape often gets so convoluted in its crags and valleys and deviations that the reader becomes lost. When describing “cyclopean” architecture and the horrific attributes of the ancient alien creatures, the high-brow, academic language remains indirect and fails in its description. It’s supposed to, as what’s being described is unknowable.

Also worth mentioning is how tangential the academic tone can seem. In At The Mountains of Madness, for example, Lovecraft spends a frustrating amount of time establishing a consensus on the best arctic drills to use during expeditions; The Whisperer in Darkness, along with The Call of Cthulhu, lingers on the “reasonable” explanations behind the strange inquiries at hand. The Dunwich Horror begins so raptly obsessed with the town’s history that, while one knows something bad will eventually happen there, it strikes an ironic chord that anything out of the ordinary could occur in a place described in such a dry tone. I think this discourages a lot of readers from following through. I know it made me reticent. But after reading through a lot of these stories, I think it’s a brilliant, if stubborn, move. You need to start at a place of reason and scientific certainty, only to let those ideals betray you later on. It’s a long grift, but one that works.

There’s also the fact that Lovecraft is inconsistent in the descriptions of his horrors. As I pointed out earlier, Lovecraft’s not trying to amass a rigidly defined mythology, but rather utilizing a loose one to tie his stories together. Monsters change shape from story to story, and the ambiguity of the descriptions only adds to the effect– although I don’t really have any evidence that this was done intentionally, I’m following the hunch that this is what makes HP’s work so damn haunting. Especially for those poor souls who have investigated the entire pantheon. Nyarlathotep shows up in a bunch of works, almost never fitting the same description twice; the Mi-Go are alternately described as Yeti-like and as crab-like fungoids… but my favorite is Yog-Sothoth, who generally goes unseen save for a benevolent lightning strike to banish some abomination back to the void. Admittedly, the following passage comes from a story I haven’t yet read, “The Horror at the Museum”:

Imagination called up the shocking form of fabulous Yog-Sothoth—only a congeries of iridescent globes, yet stupendous in its malign suggestiveness.

First, pause to recognize how nondescript that is, and yet it conjured some image in your mind. Second, recognize how he nods to your own imagination, in addition to the narrator’s, with the very first word, effectively robbing the narrator of certainty. Now let’s take a look at a passage describing not Yog-Sothoth but one of his human half-breeds, from the hilarious vantage of a hillbilly:

“Oh, oh, my Gawd, that haff face–that haff face on top of it… that face with the red eyes an’ crinkly albino hair, an’ no chin,’ like the Whateleys… It was a octopus, centipede, spider kind o’ thing, but they was a haff-shaped man’s face on top of it, an’ it looked like Wizard Whately’s, only it was yards an’ yards acrost….” — The Dunwich Horror

I find this passage particularly fantastic, first because it contains a very uncommon break from the academic prose in favor of the native tongue of hill people– and even the layman can’t articulate precisely what the creature looks like, only approximating that it looks like an octopus, or centipede, or spider with a giant ugly face on it. Second, it’s incongruous with the description from Museum, even though we know, by the final line of Dunwich, that “it looked… like the father.”

This kind of indirect, approximate horror can be found in the narrative structure itself. I mean, it has to be, right? If it’s in the language and “canon” then the story itself needs to mimic the same philosophy. HP does not disappoint. In The Dunwich Horror, the final spectacle is seen only from afar and those that watched it through a telescope were mentally injured:

Curtis, who had held the instrument, dropped it with a piercing shriek into the ankle-deep mud of the road. He reeled, and would have crumpled to the ground had not two or three others seized and steadied him. All he could do was moan half-inaudibly.

It becomes a game of telephone. It’s not what Curtis saw that gets reported, but his reaction to the thing he saw, thrice removed from the reader. You attach to Curtis’s reaction, but you still want to know what he saw.

Even better is how The Whisperer in Darkness plays out, beginning with the “ending up front” motif:

Bear in mind closely that I did not see any actual visual horror at the end.

And neither does the reader. It’s all suggested, all unknowable. The story proceeds in the now-obligatory academic skepticism of strange supernatural happenings, as the narrator makes a pen pal out of a true believer who seeks an academic understanding of the Mi-Go. The horror happens “off-stage” to that character, who writes an epistolary arc of curiosity, fear, and finally acceptance of and friendship with the alien race. When the narrator visits him, he understands something is off, but only sees traces of the crab-like fungoids, never the things themselves. When he speaks to a human being’s brain in a jar, that too is met with skepticism, with a narrative eye looking for clever deceits, but it’s never answered one way or the other whether a person or a recording provided the dialogue. Even when he’s speaking directly to one of the fungoid creatures, it may be a ruse born of either crafty mask work or expert taxidermy. Which one is left as a question.

After everything (and often at the beginning), Lovecraft will give you the opportunity to jettison the narrative from your mind and suppose that the narrators really are insane. It’s a red pill, blue pill binary. Red pill, and it’s a fall down an investigative rabbit hole as the rules of biochemistry and physics begin to deteriorate, culminating, possibly, in a fervent spiritual awakening subservient (or antagonistic) to higher gods.

Blue pill, and it’s a sick fantasy from a sick mind. Which is how Lovecraft wants you to swallow it. The cognitive dissonance of trusting one’s own interpretation over the rational accounts of those who have encountered unspeakable, unknowable horrors is perhaps the juiciest turn of all. It forces the reader to linger in that space of nothingness and unknowable-ness long after the book is put back on the shelf.

If you like horror blended with political satire, try reading The Least of 99 Evils, available here.

Star Wars – The Art of Derivation

I watched Rogue One in theaters with my family on Christmas day. I walked away from the experience pretty satisfied, albeit disturbed by the creepy CGI characters. Also, I would’ve been completely hammered if I had made a drinking game out of how many times the word “Hope” is uttered.

Overall it was a fine time. I enjoyed it more than The Force Awakens, which is, by all accounts, a perfectly OK film. I think I know why.

All art is derivative. Our best films make no apologies about it (*cough*Tarantino*cough*GuyRitchieRiffingOffTarantino*cough*). Star Wars is notable for ripping the bones straight out of Flash Gordon— in fact, the entire universe was built around George Lucas not being able to acquire the rights to make that film. What’s more is the influence of Akira Kurosawa– if Flash Gordon was the bones, The Hidden Fortress provided the meat, fleshing out the style and action sequences of A New Hope. (Lucas also snaked Kurosawa’s signature side-wipe technique, to great effect.)

So when The Force Awakens came to theaters, there was one major criticism that couldn’t be ignored. (Hint: it’s the second-biggest criticism of Return of The Jedi.) The gripe was that it was essentially A New Hope’s skeleton wearing a Millennial-friendly skin. It makes perfect sense that the screenwriters would do this to pass the Star Wars brand along from the beloved Original Trilogy to the scrappy newcomers, but after replicating A New Hope beat for beat, the film still had to introduce a whole new cast of characters, creating way too many plot points to give each a decent amount of screen time. As a result, the actual plot of the movie feels almost inconsequential, given that the movie doesn’t even end when the Dea—er, Starkiller Base explodes.

Which isn’t to say that it’s a bad movie. But when the derivative content comes from the same series, it becomes self-referential, and when the self-references become the primary leg the film stands on, it’s easy to teeter towards a redundant, unrewarding viewing experience. (To use a musical corollary: the best hip hop samples from outside its genre, even outside its own medium.)

Narratively, this also cheats the script out of valuable time to accommodate the threads of the story. For all of the various problems that plagued Episode One: The Phantom Menace (shitty kid, poor direction, Jar Jar Binks), perhaps the biggest sin was trying to cram too much story into the allotted time of a standard movie. I’ve linked to a lot of videos in this post, but if you watch only one, make it this one, which shows George Lucas and his team’s reaction to the first screening of Menace. Before he starts to justify it, he looks truly remorseful for shoving too much at once, the same way I was remorseful last night, shoving both pizza and buffalo wings in my mouth at the same time. (You thought I was going to make a sexual joke right there. Shame on you.) Lucas’s film editor has the best feedback: juggling four scenes at once convolutes the story, whereas all three films in the Orig’ Trig’ only had to juggle three. (E.g., Empire is cleanly split between Luke’s training, Han and Leia’s shiznoz, and Empire business before it all comes together.)

While Rogue One clearly had references to the other movies, most of these were background easter eggs for nerds to gush about online. (I had a moment myself when I saw a probe droid flutter in the background.) Because that’s the Star Wars brand. But the wisest decision this film made was to derive its content from other sources. First, the vibe of the first act is more Raiders of The Lost Ark, with the Arabic architecture, crowded streets, and obligatory showdowns. Second, and most notably, Rogue One takes not only a page but an entire iconic character out of Japanese cinema and drops him into the universe. I’m referring to Zatoichi, The Blind Swordsman. Zatoichi is basically Japan’s Bond franchise, featured in 26 films between 1962 and 1989, a television series, and a Beat Takeshi revival.

By going back to the Samurai influence, Rogue One succeeded in creating a standout character that the audience could attach to easily, as his predecessor had cleared the way for immediate familiarity– Chirrut Imwe, a blind warrior connected to the Force, but not quite a Jedi, and probably your favorite character in the film.

It might seem exploitative to take a character that’s essentially been screen-tested abroad for years, but after exporting Transformers and Marvel blockbusters overseas for the last two decades– to the point that overseas studios are beginning to mimic our brainless cash cows– it’s nice to see tried-and-true foreign influence in American cinema again.

Read and watch broadly, folks. Fold variegated influences into your work and resist the urge to hit the same beat for every song, movie or story.

True Crime: An American Love Story with Real Life Noir

We live in an age of unprecedented fascination with true crime. While I’m not obsessed, per se, I hold an interest in the macabre, listen to The Last Podcast on the Left religiously, and regularly weird people out with my burgeoning encyclopedic knowledge of serial killers. It’s healthy. And hey, My Favorite Murder found a surprisingly large audience and ranks #22 in top podcasts as of this article’s posting. Serial still dominates the top 10 in most charts, and its good season came out over two years ago. So why the sudden wave of True Crime Entertainment? Is it that the proliferation of podcasts in the last 10 years has offered a medium to accommodate previously verboten, niche subjects? Is it because the subject has been embraced specifically by alternative comedians, making the content more easily digestible? (Comedy is 75% horror, remember?)

Yeah, probably. But that doesn’t account for the years of CSI episodes based on real crimes, or Forensic Files, et cetera.

So maybe I misspoke earlier. I think there is a precedent.

Millennials are a generation who grew up with the OJ Simpson trial and Columbine on TV. That was the media circus that crept into our minds at an early age, when we were just trying to scam candy dollars off our parents and play Super Smash Brothers. (You could also make the case that the OJ fracas revitalized and cemented interest in The Legal Thriller, but never mind that now.) How could we not be curious about this stuff when we grew up, when we were raised in an exploitative media environment that leads with whatever’s bleeding?

That’s a piece of the puzzle, but news media has been exploitative since the invention of ink. Sensationalism surrounding serial killers was already a thing, so what happened in the late 80s that reinvigorated the interest? Other than a slew of scary murders? I guess I should say: what came out in the 80s that made murder marketable? I look to the fact that James Ellroy released the novel The Black Dahlia in 1987, a fictionalized account of the unsolved, brutal murder of Elizabeth Short in LA, 1947.

I’ve got a lot to say about Ellroy’s LA Quartet (it’s great), but for now I just want to mention that this was the book that elevated Ellroy from mere genre writer to literary status, and along with his ascent, he brought neo-noir back from the dead. You can thank James Ellroy for the Coen Brothers’ ’90s films right the hell now. He also put Elizabeth Short in the back of everyone’s brains again, with all of the gory details, priming us for a decade of sticky trials and investigations.

So let’s go back to the actual murder of Elizabeth Short, AKA The Black Dahlia. The papers sensationalized the living hell out of the bizarre murder, and while it’s somewhat understandable why anyone would latch onto this (A bisected body? A victim with a sketchy, mysterious past? Infinite room for speculation? The story writes itself!), the papers are at least partially to blame for the murder remaining unsolved. They went so far as to basically torment Short’s mother for information– placing a phone call saying that Short had won a beauty contest (can you imagine?), flying her out on the ruse of cooperating with the LAPD, and then keeping her away from authorities.

But the real mind job is why the papers called her The Black Dahlia. Okay, so they called it The Werewolf Murder first. But then they got their shit together and called her The Black Dahlia, because werewolves are gooooofy. One explanation is that she was wearing a fairly skanky black dress at the time of her death. (A sheer blouse? Heavens.) So she was wearing black when she was killed, she was known to generally wear black, lacy clothing, and some drugstore clerks with whom she was friendly claimed to have coined the handle. I find that a little suspect, but no matter how the name came about, it is absolutely a reference to a noir flick that came out in 1946, the year before Short’s murder. A little number called The Blue Dahlia.

It’s an interesting movie. It’s got a tone of misogyny to it and a character keeps referring to jazz as “monkey music,” but those things aside, it’s fairly enjoyable. It’s about a Navy officer fresh from the South Pacific who returns home to his unfaithful lush of a wife. He jets when he finds out she got into a drunk-driving accident that killed their son. She winds up dead (duh-doyee) and our guy lams it, trying to find the real killer. There’s some sharp dialogue, some good shots, and some clever twists on archetypal characters, including a “Lenny”-esque character with a plate in his head (the sound design of his auditory hallucinations might’ve been groundbreaking at the time; I was impressed), a schmoozy club owner with (pathetic) ties to the mob, and a slimy blackmailing detective. The narrative keeps coming back to a nightclub, The Blue Dahlia.

As far as similarities to Liz Short go, there are only a few. The silver-screen murder is bloodless (I laughed when the maid finds the body and says, “Oh, brother.”) compared to the ghoulish Black Dahlia case. I think what people latched onto was the wife’s loose sexuality– Short, a Hollywood actress hopeful, was known to run around LA with various men in nightclubs. At least, as far as I can figure out. The kinds of sites that offer information about her case aren’t– ahem– the most reliable.

Anyway, guess who wrote the screenplay for The Blue Dahlia? That’s right, it’s Pierre’s old favorite crime fiction author, Raymond Chandler. His bastardly behavior during the production of this film is legendary, and it’s the only produced script that he handled solo (finishing the screenplay completely waaaaasted for days, maybe weeks). It came out the same year as the film The Big Sleep, based on Chandler’s novel, published seven years earlier. (He didn’t work on that screenplay. Faulkner did. Probably wasted.)

1944 – 1954: Hardboiled fiction is hot and Hollywood cashes in, ushering in a brief period of Film Noir, influencing media in the most profound visual and tonal movement of the 20th Century.

So there’s this strange interplay of life imitating art with The Black Dahlia. Reality had, through tragic circumstances, provided a story just as lurid as a crime novel, more graphic than a film (thanks, Hays Code) and cheaper to produce than either. So we treated The Black Dahlia murder as entertainment.

And you know what? People bought it. Of course they did.

The fascination didn’t start with Betty Short (The Lipstick Murderer, anyone? H.H. Holmes– soon to be the subject of a movie starring Leo DiCaprio?), but this was possibly the most widespread reaction to a singular crime to date (barring Presidential assassinations). It could have been the severity of the violence, or the focus on the victim herself instead of the murderer (which might not’ve panned out historically if this were a solved case), or the myth-like quality surrounding it, but any way you cut it, I tend to think that America read the tragedy almost allegorically against the films they were watching and the books they were reading, and not the other way around.

Which is possibly more disturbing than anything else, really.

Villain For A Day

Spoilers for Blade Runner, Westworld, Silence of the Lambs, Ace Ventura, The Dark Knight, and so much more. Basically, don’t watch anything. Or just don’t read this blog post.

I’ve got a theory about the purpose of fictional media and how it relates to the social consciousness of the human species as a whole. First, you could say that it is our social consciousness. Hollywood is the dream machine, and our culture provides the content of those dreams. But the way that we address and view antagonists is particularly interesting to me.

Godzilla (or Gojira, if you prefer) is the filmic representation of Japan grappling with the horrors of having two cities decimated by atomic power. It’s a coping strategy. By making the tragedy into a literal monster, the concept was easier for Japanese citizens to digest and then move on from. Others have drawn parallels between 9/11 and Hollywood’s fascination with destruction porn.

Hollywood’s bad guys generally represent what we’re afraid of. Blade Runner comes to mind because it gives us a villain so sympathetic and genuine in his fear of death that a sense of humanity is granted to him, whereas Deckard’s humanity is questioned. Fast-forward 34 years to 2016, an age increasingly concerned about the potential dangers of AI, and you get Westworld, a series that portrays “Hosts” with artificial consciousness as the protagonists and self-absorbed, slave-tasking humans as the antagonists. (Kind of.) The question remains the same in both stories– how can you deny a being who is conscious the right to be alive?– but the values have shifted from sympathetic villain to sympathetic heroes.

Another progression: Silence of the Lambs came out in 1991, Ace Ventura: Pet Detective in 1994. The bad guys are a crossdresser (kind of) and a transitioning woman. A lot has changed since then in attitudes towards the LGBTQ community. Now, while I don’t want to defend the portrayals in those movies (which would be easier for Lambs, as Buffalo Bill was based, in part, on Ed Gein and possibly Jeffrey Dahmer), it would be naive to think that Hollywood would’ve nailed those portrayals right out of the gate, because, if you believe our culture creates the media we ingest, at the time this was (and still is in many parts of the country) a scary, outsider element that we didn’t understand. However, for all of the damage that negative portrayals of certain demographics can incur, there might be a silver lining– in seeing through film that transsexuality, at the end of the day, is harmless, audiences can drop their fearful attitudes and embrace more progressive ones.

Take a look at Star Wars: The Empire Strikes Back‘s famous twist (“No, I am your father.”) and sync it up with what was going on in American divorce law (in 1969, California passed no-fault divorce, with other states to follow in the ensuing decades, changing the structure of what a family looks like). In A New Hope, Luke is a twice-orphaned farm boy who goes up against an iconic evil (Vader). In Empire, we learn that Vader is Luke’s father, and after these two near-perfect movies the space opera pretty much becomes a melodramatic family soap about the Skywalkers (with laser swords! fwoosh!). The reason, I think, that the series moved in this direction is the de-nuclearization of American families– Lucas and company struck the vein of familial anxiety, attaching the uncertainty of fatherhood to the biggest badass in the galaxy. Lucas would argue that he had planned it this way all along. Lucas is a bit of a fibber; Vader wasn’t written in as a father character until the rewrites of Empire. By the end of Jedi, Darth Vader has redeemed himself, trading his own life to protect his son’s and restoring a sense of paternal love to the Skywalkers’ broken family. Likewise, divorce rates began falling in 1990, seven years after the film’s release– enough time to digest the redemption message. Or I’m just stretching this. Moving on.

The other major villain in the American pop culture zeitgeist: The Joker. He embodies chaos and, in Nolan’s trilogy, playful nihilism. We fear him because he’s unpredictable; his mind remains a black box, yet his actions are at once calculated and random. The Dark Knight came out in 2008, and while a particularly successful politician ran on the platform of HOPE, the ensuing years embraced a darker paradigm, a reinvigorated apathy that put the early 1990s to shame. 2016 seemed to personify this chaos, and a sardonic sense of nihilism became our strategic coping mechanism as our news feeds filled with relentless stories of death, violence, and viral politics.

It becomes a chicken-and-egg problem whether our attitudes are shaped by media or our media is shaped by our attitudes– but the general point I’m trying to get at is this: what’s scary now will be the norm in a decade or two. So it merits some thought as to who or what we’re putting in the villain seat. I could also be waaay off base.

Bonus Lightning Round:

Jason Voorhees embodies sexual anxiety during the HIV epidemic. Sexual attitudes relax concurrently with improved sex education, and Jason’s relevancy in pop culture plummets. (This can be extended to nearly all slasher-movie monsters.)

The Terminator is the unflinching march of technology. As I linked to above, we live in a time in which Bill Gates is scared shitless of AI. So as not to be redundant, a different way to read The Terminator is through the shallow aspect of his humanity: his skin is just a thin veneer, which he casts aside casually, without pain. This might be a stretch, but part of where our tech march has landed us is in a superficial sphere of human interaction via social media, where your (presumably genuine) human interactions are stored digitally, reduced to cold data to be mined for money later.

Voldemort is the embodiment of the fear of death (similar to Vader), a perennial fear that doesn’t have to be pinned to any particular time in history. It also accompanies wizard racism. I think this is less about how hatred is going to be normalized, but it does speak to what’s going on in western Europe and America, where fear (in our case, of death by terrorism) is intrinsically linked to outsider hatred (personified as Islamophobia).

The current state of superhero movies: internal fighting and villainizing your teammates (Batman v Superman, Captain America: Civil War, Daredevil vs The Punisher, et cetera), concurrent with the lead-up to a divisive election cycle. It’ll be interesting to see where we go from here.

Happy New Year.

Root Cause

Some folks say that the hardest part about writing is starting. It’s difficult, to be sure, but I reckon the harder part is continuing. So let’s get both of those ducks in a row and discuss the importance of motivation, the lucrative subject that really doesn’t need to be monetized nearly as much as it is.

I guess we’d have to start with the age old question, “What compels you to write?” There are a lot of answers to this ranging from the dismissive (“Because I have the sickness.”) to the delusional and grandiose (“Because I’m rad at it.”), but nearly all of the answers fall into either internal or external motivations.

External

In a lot of ways, writing was easier in an academic setting because you had teachers giving you deadlines and feedback. There are rigid rules– I need to write a short story, because I’ll fail if I don’t. Or: I need to edit this short story, because my teacher will make fun of me in front of my entire class if I don’t. (I had a good writing professor.) Those are external motivations, but they are contained in an academic setting. There are no assignments in life (unless you give them to yourself) and no grades (except Amazon reviews). Yet there are still external sources of motivation to write.

There’s a lot to be said about the support of friends and family. These people love you and want you to be happy. I hope. Accept their support. Ask them if they want to read something you wrote. Make it perfectly clear that they don’t have to. Also make it clear to yourself that they support what you’re doing and they’re going to have your best interests at heart whether they read the piece or not. So then you might need to question why you want their approval.

Perhaps you have the need for attention. I’m going to go ahead and say it’s OK to be driven by self-validation. It’s OK to use your talents to impress people. Some people will disagree, but those people aren’t funny. Perhaps you want to connect with your readership, in part because you find it hard to communicate your ideas any other way. That’s also fine. Maybe you’ve got this grandiose vision of Truth and this book is your way of clearing the wool from all these damn sheeple’s eyes, and you want people to recognize your genius 100 years from now in the annals of literary history. That’s… yeah, whatever, that’s cool, too. You might be kind of a pretentious asshat, but hey, I’ve been one myself a few times.

But there’s a limit to external sources. Because inevitably, you will fail somehow. You will get a bad review on Amazon. You will write a story your partner thinks is stupid, because it’s really stupid. A friend or family member will tell you to focus on a real job with health benefits. That can all be crushing. But were you writing for them?

The other problem with external motivation is that its currency is usually imaginary at the beginning. Sometimes it’s a helpful fantasy to keep you going. The rest of the time you might find that there are easier ways to validate yourself– like Twitter, or yoga, I guess. Point is, you might find yourself entertaining the fantasy of success instead of making moves towards it. I know this because I’m not what you’d call a “successful” writer and have “entertained” a lot of “fancies.” And I know that kind of motivation has an expiration date if you want to complete your projects, because it’s easier to dream than to write.

Derek Sivers popularized the notion that by stating your goal out loud, you are less likely to follow through with it, because your brain conflates saying it out loud with actually doing it. Now, if you’ve already spilled the guts of your novel, there’s a good chance that you did it for social recognition. Again, that’s fine. We need social recognition to maintain sanity. But you could’ve also shot yourself in the foot if you haven’t gotten that idea down on paper. I’m guilty of this too– a lot of would-be projects are lost to the wires of long-distance phone calls and late-night hooliganism when a friend inevitably asks, “What are you working on?” And I bank on the immediate gratification of having formed a good idea, and I feel like a good boy and get a pat on the head for being super smart.

Internal

But to maintain a consistent work ethic, you need to dive into the well of internal motivation. The primary example that I keep going back to is a pride in quality– that if a work sits unread in a vacuum, I can still enjoy it for what it is. Not perfectionism, not necessarily feeling like it is even good, but a sense of satisfaction that follows labor, concentration, and thought.

Another: the love of reading– a reminder that I’m participating in a creative capacity that involves my favorite and perhaps most meaningful activity. It might be trite to repeat that good writers read, but there you go. When I delve into fictive masterpieces, I try to connect to the author writing them and remember that, often, they thought the work was utter garbage. They weren’t grasping for fame or acknowledgement; they were simply trying, and their efforts spilled brilliant minds onto pages.

Then there’s the practical approach: the old “ass-in-chair time,” as it was once described to me. Making time for this can be hard, but whether or not the motivation is there, there’s work to be done. The mind resists aggressive creation when it would rather be passively ingesting digital gossip from Facebook. I’ve been moving towards setting aside a block of time each day to write, instead of word-count quotas, à la Chuck Palahniuk’s egg timer method. The quota is still there if I get distracted, but by and large, human eyes detest empty spaces.

More often than not, forcing oneself to work leads to a genuine joy of writing. It creates a mental space that’s separate from the rest of the world, a workshop wherein one can place intense focus into solving logistical problems and turn a clever phrase, a place where the rules of Hell are reversed: agonizing labor becomes pleasurable and a certain sense of freedom is regained. In a world where meaning is constantly being questioned, it feels liberating to be able to create content that speaks Truth to the author as well as, one hopes, a readership.

Capitalizing On Your Joe Job

Everyone knows that you’re not in it for the money. The money might come, but it’ll come later, years later, after you’ve amassed a small library of classics. Or it might come biweekly if you work in media or journalism.

For everyone else, there’s the Joe Job, the daily necessity of labor and exertion that fuels your creative career. For a lot of us who were trained academically to write, it’s also a necessity for improving the value of our work.

When I was taking fiction courses in college, I mostly spoke with people in my own age bracket. After college, I was unemployed for quite a while. And you know what? I didn’t get much writing done. There was very little stimulus outside of media. I wrote one piece that I’m horrified to revisit. It’s flat. It works as a cerebral exercise and only that, as there are few things in the story that resemble real-life interactions or motivations. Once I was brought into the fold of the workin’ Joe, I couldn’t stop writing. I figured I could use my daily experiences to aid my creative process. It worked. Because, hell, you have to work a job, right? That much in life is certain until robots replace us all. So until then, you may as well utilize your 9-5 the best you can.

One aspect of trying to create a career out of fiction writing that not a lot of people consider is what kind of day job you need to make it work. I’ve watched a few of my creative-minded friends walk into a demanding (sometimes satisfying) career and hang up their paint brushes. Now, this could be irrational, but I admit that I’m afraid to lose that freedom, so I stick to employment that allows my writing life to exist– staying out of offices and school rooms and maintaining flexible schedules. (At least that’s how I frame it. You could also say poor job market, Millennial work ethic, yada yada. Clam it.) These jobs might not pay as well as I’d like, but there are lessons inherent in any professional capacity. Let’s take a look at some that I’ve learned:

When I was employed before college, I wasn’t looking for anything I could use. I wasn’t writing then. NEXT.

My first job out of college was as a barista. It was a seasonal job and I was terminated after three months. I didn’t get much out of it writing-wise, other than a sense of schedule. I slowly became more consistent with the time I set aside to write because, well, I had to. Simple lesson, but an important one. Now that I wasn’t writing for a publication or for classes, I had to motivate myself to get things done. During my period of unemployment, however, I didn’t value my time as effectively as I did while holding a position somewhere. Once scarcity was established, I began to value my personal time exponentially– and began understanding how to use it effectively to start and complete writing projects.

The next job I got was at a home improvement warehouse. I liked it. It required a lot of physical labor, a few tasks that required quiet concentration and a lot of talking to people. And people get chatty at those stores. It helped me connect with blue collar Americans. My co-workers and customers fed my imagination and gave me grounded details of their rural lives. It was stuff that I could take back to my desk and fold into scenes, enriching the sense of realism. One of my supervisors found out that I was a writer and joked that I was going to make him a villain in one of my books. And then I did, as I could perfectly account for how he’d react in any given situation. That job also helped my dialogue immensely. More on that in a bit.

Third job? Makin’ sandwiches. Everyone should, at least once in their lives, hold a job they don’t give a single, solitary doo-doo about. And then they should quit. I’m not sure I learned a writing lesson at this one, but I did learn how far I could push my writing schedule while phoning in a work performance. At the frenzied height of one of my novel revisions, the daily schedule looked like this:

3 PM – 9 PM: Make sandwiches, go home

9 PM – 4 AM: Write

4 AM – 8 AM: Lucid dream about writing

8 AM – 10 AM: Write down the passages I wrote while asleep

10 AM – 2 PM: Nap

And repeat for nearly two weeks. Then I got sick and had to tone it down a little. Maybe I learned a lesson about my own boundaries and limits. Maybe not. NEXT.

I did tech support for Apple products at a call center. Not only did I get an education in pacifying aggravated customers, I got the opportunity to chat with people from every geographic region in the USA. It gave me not only a crash course in regional dialect but also insight into how different ways of communicating reveal how people think. I came into that job with the bias that New Yorkers were a pissed-off, curmudgeonly people and that Southerners were a simple folk. I was delightfully proven wrong. New Yorkers speak fast. They live in a fast-paced world, even when they aren’t in a hurry. There’s no reason to take offense at that. They’re also probably the most generous people I spoke with– I’ve been invited to dinner no less than five times by New Yorkers and Jerseyans, every time in an aggressively friendly manner. Meanwhile, I learned that while Southerners speak at a slow pace, they’re not slow-witted. I held that bias longer than I’d like to admit. Then I had a call where I was walking someone step by step through a reboot process; usually by step 3 or 4, I’d let the customer take it from there. He didn’t. I asked him if he knew the next steps in the process, and he said that he did but was waiting until I told him to do so. It wasn’t that he, or Southerners generally, was a dum-dum– he just respected my authority on iPhones.

How this relates to writing (you may have wondered 200 words ago) is the dynamic of effective dialogue. It’s my opinion that dialogue, in addition to any narrative information conveyed, should reflect an attitude. In that respect, this job was a goldmine. I figured if the way a person asked for help with iMessage could sketch a small portrait, then every tiny line out of a character’s mouth should be another brush stroke of a mural.

I’m going to skip all the other jobs, the gigs, the crawling back to previous employers, and the other months of unemployment, and go straight to my currently held position:

It’s pretty great. It’s physical, so I don’t resent sitting at a computer for hours at a time at night; it provides enough critical problem-solving that I don’t go completely insane; and since it requires following procedures, I have the opportunity to daydream and mentally review what I’m working on creatively– fix logical issues, revisit character relationships, figure out the next step– all while performing my daily paid duties. Or I listen to podcasts (some about writing, some not) to keep up to date on the goings-on in the world without that biting into time after work, so that I’m prepared to dig into my projects in a creative mindset when I get home.

The point is, you shouldn’t despair at your job. There are opportunities to expand your creative life everywhere. Keep your mental notepad open and figure out a way to keep writing, even when you’re not writing.


Tuning to Harmony

I remember that the two dirtiest words in an English course discussion were “author’s intent.”

In summary, the discussion cuts the same way every time: one side says that author’s intent is negligible, that creators aren’t always cognizant of the significance of what they’re creating; the other says that we must respect the genius inherent to the craft, that every little thing is in its proper place and there for a reason.

A good rule of thumb is to be a middling son of a gun. Writers aren’t gods, but the good ones ain’t slackers either. (Except for me. I wear my hat backwards and am late to stuff.)

Anyways, this discussion generally leads to another popular one: “Is symbolism intentional?”

Again, it depends. And I’ve found that the answer can be yes and no about any particular symbol.

In an episode of Radiolab, Paul Auster describes what he calls “rhyming events,” using a real-world example: a girl he dated in college had a piano with a broken F key, and later that year, on a trip to rural Maine, they encountered an old (abandoned?) Elks lodge with a piano… that had a broken F key.

Uncanny? Sure. Does it mean anything? I think Auster mentioned it because there’s a certain unworldly profundity to the circumstance that he doesn’t understand. A theist could point to the hand of God underlining a certain meaning, an existentialist would write in their own meaning for how it’s to be interpreted, and a rationalist would say it’s just the hazard of coincidence. And so forth.

I think this question is one that Murakami plays with often. In Hard-Boiled Wonderland and the End of the World, there’s a little, unassuming detail about the main character: his most prized possession is his whiskey collection. That the narrator is a heavy whiskey drinker is featured prominently, but when he describes the bottles he values, he lists Old Crow and Wild Turkey (among others), the former being generally low shelf, the latter middle shelf. Did this mean anything? Does it speak to a sense of emptiness that his most valued possession is some of the cheapest bourbon on the market? Or was this just a sign of 1980s Japan, when the foreign whiskey market opened up, making Old Crow a hot item of the times? Does Murakami want me to be asking these kinds of questions?

I’ve also argued (in my head) about the recurring motif of lice in Salinger’s The Catcher in the Rye. [cue montage of every line using the word “lousy”] Does this speak to Caulfield’s paradigm? That the world is a louse-ridden, filthy place? Or is Salinger just tapping into the common verbiage of an angsty teen? Am I cheated out of anything if the second turns out to be true? Does it make the first interpretation any less true? History has shown that it’s not the best idea to overthink Catcher in the Rye.

Another quick example: IS PAUL DEAD? Quick take: No, but The Beatles sure loved to keep the meanings of their songs ambiguous, and probably played into the hoax as it unravelled the minds of acid-tripping college radio DJs.

Ahem.

For writers, it would seem that woven-in symbolism is optional, because it might happen anyway. Disregard the question of intentionality entirely: successful symbolism and underlying conceptual themes ask the reader questions instead of attempting to define anything concrete.

That doesn’t mean you should stop trying to massage meaning into your own work. It means that you first have to keep it open.

Riffing off Auster’s terminology, I’ve noticed that there are resonating frequencies in my own work. In the first draft, it’s my job to create opportunities for these moments– scenes, details, dialogue– to resonate. Just like Auster’s example, I’m writing about circumstances that appear to have profundity, even if I can’t quite place what’s so profound. It might not be the author’s job to place it, either.

Going back over them in the second draft, it’s my job to see which frequencies work together, tweak them so that they harmonize, and cut everything that’s singing out of key. The idea is to normalize a certain complexity of language so that it’s barely noticeable– casual readers can enjoy themselves, and thoughtful readers can dig into some juicy concepts.

But when in doubt, it’s best to stick to basic storytelling first. Don’t carry the burden of making the cleverest, densest, most heavily layered piece of fiction in the world. It’s been done, and it sucks.

It’s also helpful to remember that a cigar can just be a cigar.

(Bonus round: Did I include the Kanji symbol as the header because it has some sort of significance or because I thought it looked like a haughty bird person holding a basket?)