My (soon to be) First Full Musical

Dewey in the lab
Me in my “summer home” at the Lamont music-computer lab.

Hey Dewey, haven’t heard any of the music you’ve written in the past couple of months. What have you been up to?

Great question, convenient, rhetorical friend! Over a year ago, in July, I signed a contract with a Pennsylvanian writer named Ellen to help make the music for her musical titled “The Pond: A Fairy Tale Gone Horribly Right, A Scientific Hypothesis Gone Horribly Wrong.” My role in this project has evolved over the last year from writing arrangements of existing songs to writing new music to replace those songs.

So what have I been up to? Something I’ve been training for my whole life: writing songs inspired by other music.

Back in high school…

I wrote a number of songs of questionable quality. It was a “Like this status” challenge that obligated me to write individual songs about over 40 people. To meet that high demand, I found songs I liked (that were arguably related to the person I was writing the song for), and I imitated them. My favorite example was “Hernan: The Man, The Myth, The Legend,” which was inspired by the song “Shout” by Tears for Fears. (Funnily enough, I submitted this song as part of my portfolio that successfully got me into the classical composition program at Carnegie Mellon…)

How does one imitate a song? Well, for this song, I just picked the things I loved about it. It had a looping idea that added a new layer with each phrase. The hook was a quasi-chant. And it had an instrumental bridge in the middle. So I wrote a song with those qualities.

Hernan The Man, The Myth, The Legend – The Bridge into the Final Chorus (I don’t add this to this blog lightly; nobody likes to dig up material from their high school days! But this excerpt is an example of what I’m talking about.)

Now, I’m doing this… professionally?

It’s weird that this thing I did passably in high school is now a professional gig defining an entire summer. And this summer is intense. Some background: we decided in early April that original music was the way to go for this musical. But that was in the middle of a two-term stretch that, at the University of Denver, forces me to tread water hard just to stay afloat. So I looked to the summer as my opportunity to kick some life into this project. I saw the need to write 10 major songs, and I had (after a much needed break) 10 weeks to write them in.

How did I approach this fast rate of music output?

Well, I had a great framework to go off of. Ellen had already made the decision about where musical numbers should go, and what type of music fit there. All I had to do was imitate the songs she had already picked. Like I did in high school. But, this time, I’d say I had more tools to work with. And these songs also had to fit into a dramatic, musical context.

Trapped in a Mind with a Friend was my first large-scale experience taking someone else’s dramatic vision and capturing it musically. I decided to take what I learned there and apply it here. In this show, Ellen had written some animated characters who also had depth, a combination that’s tricky to pull off. It’s easy (especially in musicals) for characters to be caricatures more than people. And the music can swing the scale, wildly, between these two poles.

Let’s Look at Rana Glosioso

A seductive frog who knows what she wants in the world. And what she wants is our protagonist, Prince. She’s not just a temptress to sway Prince from his journey’s goal, though. She’s been extremely thoughtful in the decisions that led her to be who we see: she became a frog and decided to continue being a frog; she created her own life around this decision; and she didn’t just flirt with any random frog, she saw Prince and saw why he was the one for her. (That’s all I can say without giving away any more spoilers.)

So when she thinks she might not ever be with Prince, is her song grossly melodramatic? No, it’s pensive and heart-wrenching. Sondheim writes that sort of music extremely well. So I studied his song “Not A Day Goes By,” from Merrily We Roll Along. With every. Tool. I. Have.

Without Getting Too Technical

I played the score on the piano, analyzed the chords, and analyzed the melody. One of the large-scale elements of the song is that he modulates from F-major to G-major, but then doesn’t even end the piece on a G-major chord. It’s something that, theoretically, makes little sense. But to our ears, it’s extremely expressive. Some of his chords don’t make sense in a traditional context and, therefore, are hard to make rules for. Needless to say, Sondheim is very unique. And looking at these elements, it was hard to come up with a way to imitate him without just copying him.

Still, like in high school, I set out with a few goals in mind: modulate to SOME key like Sondheim, use the same types of chords as Sondheim (9th chords, suspended chords, maybe throw in an augmented chord), and add in some modal mixture (notes that don’t make sense in the key, but make sense in a scale related to the key).

(The technical, musical mumbo jumbo is almost done, don’t worry)

Every idea came to me piece by piece. The melody should have long notes to let a countermelody shine through. And with that thought, now I’ve written 4 measures. Sondheim has these yearning triplets that define the song. I should add those, maybe incorporate some modal mixture there. There, 4 more measures I can return to throughout the song. Instrumental breaks show a breaking of emotion into the inexpressible. I should write one. 8 measures.

Well, you get the idea (or maybe you don’t, so I guess you’ll have to see the musical when I’m done).

Back in June, as summer was starting, I knew that this project was intense: at least 10 original songs. And I knew that, when school started again, it would be impossible to find quality time I needed to get the product I wanted. The summer was my chance, and I would need to average about a song a week to get it done.

So what have I been doing the past couple of months? A LOT of composing that amounts to at least one song a week at 3-4 hours a day, as well as keeping up with other projects (that you will hear about soon). Unfortunately, I won’t be able to share this music until we get a production going. But keep an eye out for updates on “The Pond”. It’s going to be a fun show to put on and watch.

– Dewey

Be sure to take a listen to my most recent release, Paranoia in an Illusion (the Trouble with Eyes) for wind ensemble. And add your email to my list to receive updates in the bar on the left!

The Patient Pawn Is Waiting

Who is The Patient Pawn?

The Patient Pawn is there. And you wonder if others notice him.

You think about him. How he volunteers to do the menial tasks. Complacently, quietly. The cog in the machine that never needs to be oiled. He copies, he transcribes, and he waits.

You wonder why he does it. Why he volunteers. Is he afraid of being taken for granted? Would he mind if he were? You wonder if he enjoys taking the unwanted scraps of the tasks, left by society. And, somehow, you’re afraid that he does.

The Patient Pawn is the one there when you hope someone will be. He’s soft in his demeanor. Approachable, and yet opaque because you don’t. You don’t know him. Your friends don’t know him. And you don’t know if he has friends. He smiles nonetheless.

What could be underneath?

What if The Patient Pawn were cruel? What if he were strange? What if he weren’t what you wanted him to be? You realize… what if he were human? What if The Patient Pawn were driven by carnal desires of hunger and lust? What if The Patient Pawn didn’t smile for other people’s benefit but smiled as the result of some strange machinery, deep within his cerebellum? Sick thoughts. Or misplaced thoughts. Or decadent thoughts.

You don’t want to know The Patient Pawn. You want The Patient Pawn to be you. To be better than you. To be all the parts of you that you like and all the parts that aren’t human. You want to be The Patient Pawn. The Patient Pawn just wants to be.

And so he waits.

Faking My Way through the Electroacoustic World

FOR MY FRIENDS WHO ARE MORE INTERESTED IN THE TECHNICAL PART OF MY ELECTROACOUSTIC JOURNEY (OR IF YOU DIDN’T CLICK ON THIS POST TO HEAR ME RANT), CLICK HERE.

I admit it:

I’m not as good at electroacoustic music as I should be. I can program. I can write music. But I’ve never been good at putting them together.

To date, I have done three projects that involve live electroacoustic work: Steel Symphony for Contemporary Ensemble, Innocence, and The Party. Steel Symphony is the ultimate example of electroacoustic work. Daniel Curtis (danielnestacurtis.com) and I dreamed up 3 live effects that we could put on 10 instruments to multiply the perceived size of the ensemble, allowing us to orchestrate an 80-musician symphony down to 18 musicians. We dreamed up the effects, but tasked the multi-talented Alexander Panos with executing them. It was similar with Innocence: I dreamed up the effects I wanted on the narrator’s voice, and Samir Gangwani (samirgangwani.com) brought them to life.

So, when I was told, recently, that my portfolio’s greatest weakness was the lack of involvement of electronics, I was not surprised. I could never make the programs I needed for electroacoustic work. So I didn’t.

The problem is (my excuses are…)

A lot of the music world (including Alexander and Samir) uses a program called Max, which is an amazingly powerful tool. It is a visual programming environment that can modify sound immensely with only a couple of commands. You don’t have to know very much about digital-signal processing, WAV files, any of it. It’s beautiful.

So why haven’t I made my own patches? Why haven’t I used Max? Well, this is a good time to talk about The Party. The Party, in short, is a piece that uses the distribution of people in a room to generate random beats. For example, if there are more people at the bar, the beats are more “trippy;” if there are more people on the dance floor, the beats are more groovy… In theory, at least. But basically, the computer takes in info through the camera, processes it, uses that info to manipulate a sound library, and then sound comes out. Easy.
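As a hedged sketch of that pipeline’s decision step (the zone names and the rule are my own inventions, not the piece’s actual logic), the “more bar means trippier, more dance floor means groovier” idea is just a comparison over head counts:

```python
# Hypothetical sketch of The Party's decision step: map the distribution of
# people in the room to a style of beat.
def choose_beat(zones):
    """zones maps an area of the room to a head count, e.g. {"bar": 5, "floor": 12}."""
    return "trippy" if zones["bar"] > zones["floor"] else "groovy"
```

The real piece sits this decision between the camera processing and the sound library, but the core idea is that small.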

Could I have used Max to do this? Yes. Did I? No. Partly because this piece was a project for a programming class (15-112 #CMU), which required me to program in Python. But also, because of this class, and because I was raised by two engineers, I was trained to think like a computer scientist. So I used Python to make this program.

And now we come to why…

Max and I live in dissonance.

Before I go on, let me be clear that, if I had done an extensive lesson in Max and all of its facets, I might not run into these problems. I admit it, a lot of my complaints come out of my own laziness.

HOWEVER, especially in the last year, as I’ve tried to build some pieces of software in Max (they’re called “patches”, ok? Patches), I’ve run into some recurring issues. First, some standard effects, like reverb, delay, and pitch bending, are not readily available. There are patches from the examples and the internet that you can copy and paste to make them, but I feel that software specifically made for sound manipulation should have these effects as fundamental building blocks.

Second (and this is my REAL reason for not buying into Max), the program doesn’t follow standard computer logic.

Here’s an example:

I’m working on a piece that I want to take in the sound of a viola and output a very dense cluster chord of pitches (microtonal, between notes). That is, if the violist plays an A-440 Hz, the program will take that pitch and output the viola’s sound at 439, 438, 437 Hz, etc., AS WELL AS 441, 442, 443 Hz, etc. Think of it as a densely packed ball of sound. To do that using Max, I used an online example that used Fast Fourier Transforms (FFT, a standard technique for pitch bending) to get the job done. It doesn’t require a big block of code to use, but it’s not a small one either.

Then, when I went to route through these transforms over and over, I ran into the logical dissonance Max and I have: a computer programmer would build a loop that processes the signal over and over and bumps up the amount it bends the pitch each time. It would be quick and efficient (on the fingers, a workout for the computer). Max, however, needs you to copy out the transform each time you want to bend that signal. And while there are ways to condense down the body of the code, you still need to literally route that signal each time you want to bend it.
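For illustration, here’s a hedged sketch of the loop a programmer would write. `pitch_shift` is a stand-in I made up for the FFT processing (it just labels the offset applied), not a real Max or pyo call:

```python
# Hypothetical stand-in for the FFT-based shifter: it only records which
# offset was applied, so the shape of the loop is easy to see.
def pitch_shift(signal, hz):
    return f"{signal} shifted {hz:+d} Hz"

viola = "A440"

# One loop produces every shifted copy of the signal. In Max, each of these
# copies would be a hand-routed duplicate of the FFT patch.
cluster = [pitch_shift(viola, hz) for hz in range(-7, 8) if hz != 0]
```

One list comprehension, fourteen shifted copies; adding more copies means changing one number, not re-routing anything.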

For my non-programming readers, here’s the sum of it so far: what takes 15 copies of the same piece of code in Max only takes 3 lines of code in Python. And 3 is less than 15, so that means Python wins.

Now here’s where we learn why people don’t use Python to begin with: Python is an insanely versatile tool that has been used to make anything from games to lab research tools to dumb calculators. But because it’s not made for just electroacoustics, it’s much more raw in its handling of sound. But that doesn’t stop me from using it.

See what I do about it next…

Getting Started to Stop Faking My Way through the Electroacoustic World Using Python

Now we get to the real reason for these posts.

For those of you who opted to skip the rant that is the first post I made, welcome! For those of you who want some context before jumping into some deep Python, click here.

Almost a week ago, I was discussing with my father (no relation to “Father”, but relation to mcdewey.com) my tribulations in needing to enter the electroacoustic world. I told him how I am unable to get started because I don’t know what tools I’m working with. In an orchestra, you know you’re going to have woodwinds for all of those woodwind-y moments. But with electroacoustics? Well, you have whatever you’ve given yourself to work with.

He said to me that any programmer builds a library over their life of pieces of code that they use over and over. For example, a weather app maker will always need a piece of code to convert Celsius to Fahrenheit and back. So I need to build mine. And that process, he said, will be slow and frustrating.
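That weather-app snippet he mentioned really is the scale of thing such a library collects, a few lines you never want to rewrite:

```python
# The kind of small, reusable snippet a personal library accumulates.
def c_to_f(c):
    """Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

def f_to_c(f):
    """Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9
```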

So last Wednesday, I went through that process.

I’m hoping I can give pieces of advice through the telling of this story to help those of you who might be thinking of moving away from Max and need some help in the cold, desolate world of Python libraries. So here goes:

(1) First, I began by trying to use PyAudio raw

This was a mistake. PyAudio is very convenient for very basic sound processing. It will record audio. It will play audio. It will even playback your microphone live. But any manipulation on the data it handles will most likely be to the bits themselves. Trust me, this is like messing with the registry on your computer. You can do it, but you’ve really got to know what you’re doing in order to not break the thing you’re playing with.

(2) I cleaned up my Python environment

I had installed Python on my computer many months before this day. So when I went to install some new libraries, I saw that there were two install locations, and I had no idea what was going on. So I uninstalled everything. Which is not inherently a bad idea! But, here are the mistakes I made in reinstalling Python:

(a) I installed Python 3.7

As of today, August 27th, 2018, this isn’t a good idea. Python 3.7 has some glitches with PIP (the auto-library-installer) that make it impossible to use. If you want to use Python libraries (you do), you want PIP.

(b) So, I installed Python 3.6.8 to the root of my C Drive

Which was not smart. Be sure to make a folder for your Python.

(c) And then, I installed the 64-bit Python

It is so dumb that this was a mistake. But this is Windows, and I’m about to explain why this was a mistake.

So, in the end (after many installs and uninstalls), I had a 32-bit Python 3.6.8 installed in its own folder (“Python36-32”), and that folder was in the root of the C Drive (C:/). When I installed, I told it to install PIP and add Python to my environment variables. I was ready to go again.

(As an aside, I already had my text editor, Sublime Text, installed. If you have no experience with Python and don’t know what to use to write and execute your code, pick your code editor here. I won’t go into much more detail, but I believe that if you use Sublime, and you did everything above correctly, then all you have to do is save your file as .py, hit Ctrl+B on your file, and everything should run smoothly.)

(3) I played around with pysndfx

Short for Python Sound Effects. These guys have built a sleek library that does all the basic effects an electroacoustic musician could ask to start with. However, their live sound manipulation is still in beta. But, if this strikes anyone’s fancy, I invite you to take a look at it (check them out on their GitHub page).

(4) Finally, I discovered pyo

Let’s be clear, at this moment, I was only 1/3 of the way through my day. I will tell you right now, pyo is what I stuck with (spoiler alert). And while it is an INSANELY powerful Python library, it has its quirks. And they took me another third of my day to figure out. So here’s that process…

(5) Installing and setting up pyo

Real quick thing about pyo: this is a product of AJAX Sound Studio that essentially takes the inner guts of PyAudio and injects it with steroids to allow for the live electroacoustic manipulation I was looking for, all wrapped up in a neat package. This is basically what I was trying to build with PyAudio, but done in C (a much faster language) and already completed (so I don’t have to do it). Check it out on their site: ajaxsoundstudio.com/software/pyo/

So here’s what I went through to get it installed:

(a) This installs through a .exe

I know, I know. I just made a lot of noise about making sure PIP works for your Python. And at this moment, that doesn’t matter. But… still make sure your PIP works for your Python.

Anyway, you download the .exe and run it. Simple.

(b) You have to make sure you’re working with 32-bit Python

For whatever reason, the default folder this saves to is always a 32-bit Python folder. And it doesn’t automatically aim for the right destination. So make sure you’re installing to your Python36-32 folder, then go to Lib > site-packages and give it a folder in there. This is you doing PIP’s job for it. (This step is why I said installing 64-bit Python was a mistake before. 32-bit libraries should work in 64-bit worlds, but… what can you do when they don’t? So you use 32-bit worlds.)

(c) Do the damn introductory files

After fighting through a difficult setup and install process, the first thing I always want to do is hop in, start playing with the software, and get results. But I learned so much faster by going through their example files. Here is what you will learn: (1) pyo sets up a “server” for your sound to run through. This server’s settings control your sound and are the reason why pyo runs so quickly. (2) If this server isn’t interacting with your sound driver correctly, pyo won’t work at all. Here are two solutions I came up with:

(a) Specify the driver with winhost:

Ex. Server(winhost="asio").boot()

I never really figured out how the winhost names correlate with my drivers, but if you go onto pyo’s audio setup documentation, you’ll see there aren’t too many of them, so you can guess and check: ["mme", "directsound", "asio", "wasapi", "wdm-ks"]. "wasapi" is the default; "asio" is the one that worked for me.

(b) If you read that documentation, you’ll notice there’s some stuff lower down about specifying sampling rate.

I prefer this, because my computer changes drivers depending on whether I’m playing through speakers or headphones. And this is WAY TOO MUCH risk for me (imagine, I’m testing through speakers as I build my program. Then I get to the performance and have to plug into speakers, and can’t figure out which driver to route to… literal nightmare). This is what I found worked for me:

Ex. Server(sr=48000)

How do you know what your sampling rate is? Right click your lower-right sound icon > Open sound settings > Device properties > “Advanced” tab. You’ll see in the dropdown box the default format your driver uses. Mine says “24 bit, 48000 Hz (Studio Quality)”. So if I tell pyo to use that sample rate, no matter the output device, it knows what sample rate to use and, therefore (I think), which driver to use. As a result, my sound plays no matter where I want it to come out.

(d) Once you get here, you can start playing

And playing was the last third of my day. The introductory files were still a huge help, but I had a lot of fun playing with different methods and developing the sound I wanted for a piece (all with the help of pyo’s decent documentation)!

And this is exactly what my dad was talking about. You need to build a world so you can play in it. Now, you might have noticed, I said I did this in a day. Which means the world I built to play in is PRETTY SIMPLE. But it makes me excited to do electroacoustics and gives me a path to open up my creativity into this world. I’m excited to show you what I make in it, and be sure to show me what you make too!

– Dewey

 

… And because I’m a nice guy, here’s the code I used that gives me simple playback from the microphone.
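(The original snippet didn’t survive in this copy of the post, so here’s a minimal sketch of what simple mic playback looks like in pyo, assuming a working input device and the 48000 Hz sample rate discussed above:)

```python
from pyo import Server, Input

# Boot the pyo server at the sample rate my driver reports (see above),
# then start the audio callback loop.
s = Server(sr=48000).boot()
s.start()

# Read channel 0 of the default input device and route it straight out.
mic = Input(chnl=0).out()

# pyo's small GUI keeps the server alive until you close the window.
s.gui(locals())
```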