A computer simulation built in LabVIEW
I spent some time with LabVIEW and two books—Code by Charles Petzold and The Elements of Computing Systems (TECS)—in order to understand how a computer works at its most basic components. The result was a working simulation of a computer built in G (LabVIEW code). It's nowhere near feature complete, nor is it particularly efficient, but considering the purpose was solely didactic, it accomplished what it set out to do. I tried to use a very limited set of LabVIEW's features, implementing every fundamental logic gate from only the NAND gate built into LabVIEW. Here I have presented some of the preliminary parts, the ALU, and the registers.
If you have LabVIEW and want to check out the code, feel free to email me and I'll distribute what I have.
Here's a quick video demo before I explain some things below:
Starting with the NAND gate, I created AND, NOT, and OR:
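G is graphical, so the diagrams don't paste into text, but the same construction can be sketched in Python, with NAND as the only primitive:

```python
def nand(a, b):
    """The only primitive gate; everything else is built from it."""
    return not (a and b)

def not_(a):
    # NAND with both inputs tied together inverts the signal
    return nand(a, a)

def and_(a, b):
    # AND is just NAND followed by NOT
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))
```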
From there I created more compound logic gates and multibit versions of all of these. Then I set to work on an adder and a whole ALU.
The ALU was based on the specification outlined in TECS; it took two 16-bit inputs and six control flags: four for zeroing or negating either input, one for negating the output, and a function flag specifying whether to Add or And the inputs. You can see here I "cheated" by using a for-loop to make handling multiple bits easier and a cluster to handle the input/output flags. Other than that, every node you see on the diagram is either an input/output node, or created from scratch.
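In text form, the TECS control scheme amounts to something like this Python sketch (the flag names zx, nx, zy, ny, f, no come from the book; the 16-bit masking stands in for the fixed wire width):

```python
MASK = 0xFFFF  # 16-bit words

def alu(x, y, zx, nx, zy, ny, f, no):
    """Sketch of the TECS ALU: zero/negate the inputs,
    Add or And them, optionally negate the output."""
    if zx: x = 0               # zero the x input
    if nx: x = ~x & MASK       # bitwise-negate the x input
    if zy: y = 0               # zero the y input
    if ny: y = ~y & MASK       # bitwise-negate the y input
    out = (x + y) & MASK if f else (x & y)  # f selects Add vs And
    if no: out = ~out & MASK   # negate the output
    return out
```

With the right flag combinations this tiny instruction set covers a surprising range: for instance, the flags (0, 1, 0, 0, 1, 1) compute x - y in two's complement.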
Here it is, miraculously adding two numbers.
On the registers, I cheat again, storing data in LabVIEW's feedback node (since virtual wires don't really have a charge).
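The feedback node's behavior is roughly what this Python sketch models: the stored word persists from one cycle to the next, standing in for the charge that virtual wires lack (the class and method names here are my own, not anything from LabVIEW):

```python
class Register:
    """Models the feedback-node trick: state carried across cycles."""
    def __init__(self):
        self.state = 0  # the word held by the feedback node

    def tick(self, value, load):
        out = self.state       # output is last cycle's stored word
        if load:
            self.state = value # latch the new value when load is set
        return out
```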
SampSyn is a real-time granular synthesizer for OS X that determines the grain size of a user-specified input file based on the note selected. The back-end audio synthesis was built using the Synthesis ToolKit, a C++ audio API created by Perry R. Cook and Gary P. Scavone, and the front-end was developed using Cocoa, Apple’s native API for Mac development. It was originally created as my final project for Georg Essl's Digital Sound Synthesis class (PAT 462) in the Winter semester, but I kept working on it over the summer and into the fall. It was showcased as a submission to the AES student design contest, October 2012 in San Francisco. An early version of the code is on Github.
How SampSyn Works
SampSyn creates output by repeating a single part of a file. In general, the frequency of a sine wave is determined by how many full cycles the function completes in a second, but SampSyn creates frequencies by repeating small chunks of a file. Instead of repeating a sine wave, SampSyn plays and repeats a section of a file, truncating it depending on the note the user plays. The file itself is not time-stretched at all; rather, the amount of the file that is played decreases, so that the shorter chunk repeats more times per second. From here on out, I will use the term “frequency” to mean how many times per second SampSyn repeats a portion of a file, which depends on the length of the repeated portion. SampSyn can be used with any MIDI device, but I will use a MIDI keyboard as an example for simplicity.
SampSyn works by mapping MIDI note values to lengths of an audio file. There are currently two algorithms to determine which MIDI values get mapped to which file divisions.
The first is the standard algorithm, which starts from the MIDI note value and calculates how much of the file is to be played. For each MIDI event it receives, the algorithm computes a file division d(n) = 2^(n/12), where n is the MIDI note value, spanning the domain of all possible values, 0 to 127 (C-2 to G8). SampSyn then plays 1/d(n)th of the file. So, for example, MIDI note 0 plays the whole file, note 12 plays half of the file, note 24 a quarter, and so on.
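The standard mapping can be sketched in a few lines of Python (the division formula is reconstructed from the note 0/12/24 examples: the played portion halves every octave):

```python
def file_division(n):
    """MIDI note n selects 1 / 2**(n/12) of the file to loop."""
    return 2 ** (n / 12)

def loop_length_samples(n, file_length):
    """How many samples of the file get repeated for note n."""
    return file_length / file_division(n)
```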
This option is set as the default when the synthesizer opens. Its drawback is that the keys are mismatched to the equivalent pitch output of a regular instrument. The frequency of any depressed key depends entirely on the length of the audio file the user selected, so unless the user is lucky, pressing middle C on the keyboard will not produce a frequency that corresponds to the actual pitch of middle C. In fact, this algorithm may not produce frequencies at any correct pitch in standard tuning at all. So that the user can play in tune with others, a second algorithm was created.
The second algorithm is the pitch correction algorithm. It defines the file divisions by starting from the frequency output of each key, so that the output is fairly in tune under equal temperament. Since a standard keyboard does not include many of the lower MIDI-valued keys that create the most curious effects, an octave switcher on the user interface lets the user reach every sonic zone SampSyn can produce. Other than redefining the frequencies to align with pitch, this algorithm works the same way as the standard one: every octave, SampSyn’s frequency doubles.
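A sketch of the pitch-corrected mapping under equal temperament; the sample rate constant is my assumption (the text doesn't specify one), and the loop length is simply however many samples fit in one period of the target pitch:

```python
SAMPLE_RATE = 44100  # assumed; not specified in the text

def note_to_freq(n):
    """Equal temperament: A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((n - 69) / 12)

def loop_length(n):
    """Samples per repetition so the loop rate matches the note's pitch."""
    return SAMPLE_RATE / note_to_freq(n)
```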
What SampSyn Sounds Like
At the low end of the keyboard, the synthesizer will audibly repeat the selected section of the file at a frequency defined by which key is currently depressed. It is aurally apparent what the contents and major features of the file are in this region. The difference that sequential keys make on the calculation of the file division is minimal, so drastic differences are only apparent when making large intervallic jumps. This area of the keyboard, while not producing output other than one would expect, is still very fun to play with.
The high end of the keyboard plays the selected portion at a very high frequency; so fast, in fact, that the contents of the file are hard to discern. Only timbres that last a moment in the file can be heard. Without the smoothing option turned on, the artifacts of the file being played so quickly create a sound akin to a sawtooth wave in this part of the keyboard. In this section the differences from key to key produce distinct notes much like any other instrument, so it is easy to make a melody.
But the most interesting feature of this synthesizer becomes apparent when the file division puts the output frequency just barely inside the lower end of the human hearing spectrum, at around 20 Hz. This is so intriguing because the contents of the file are still audibly apparent, yet the artifacts of such rapid repetition are just starting to flirt with the effect heard at the high end of the keyboard. It is really a sound like none other.
Computational Physics Algorithms
Winter Semester 2013
At the beginning of 2013, I had the immense pleasure of taking Mark Newman's Computational Physics class. In that class we implemented a ton of seminal physical and mathematical algorithms. Here are some of my favorites.
Monte Carlo Integration
Evaluating the following integral:
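The specific integral from the assignment isn't reproduced here, but the mean-value Monte Carlo estimator we used follows this Python sketch (the x² integrand is just a stand-in, not the one from the class):

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Mean-value Monte Carlo: I ≈ (b - a) * mean of f
    over n uniformly random sample points in [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n
```

With 100,000 samples, an estimate of the integral of x² over [0, 1] typically lands well within 1% of the exact value 1/3; the error shrinks like 1/√n.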
LabVIEW Monophonic Synth
Prior to every major release, everyone who works on LabVIEW has a whole day to use the product to make whatever they want (and also file bugs). During this year's test day, I made a simple monophonic synth that you play with the (computer) keyboard.
I also implemented the Karplus-Strong plucking algorithm in LabVIEW. Here is a video demo of that and some Chebyshev filters:
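The LabVIEW diagram doesn't translate to text, but the algorithm itself is compact. Here is a Python sketch of Karplus-Strong as I understand it (the 0.996 decay constant and buffer handling are my choices, not a transcription of my G code):

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=1.0):
    """Plucked-string sketch: a noise burst recirculates through a
    delay line with a two-point averaging (lowpass) filter."""
    n = int(sample_rate / frequency)  # delay-line length sets the pitch
    buf = [random.uniform(-1, 1) for _ in range(n)]
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(buf[0])
        # average the first two samples and feed back, slightly damped
        avg = 0.996 * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [avg]
    return out
```

The averaging filter damps high frequencies faster than low ones, which is what turns the initial noise burst into a decaying string-like tone.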
A scripted version of Steve Reich's famous 1967 piece Piano Phase written in ChucK.
One day in April, I was thinking about the canonical "monkeys on typewriters" thought experiment and wondering how one would test this if one had an infinite supply of time and patience. I whipped together this script, taking advantage of the fact that π is transcendental, as a stand-in for true randomness. I know, I'm truly solving the world's toughest problems. Requires NLTK and a lot of digits of pi.
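The actual script used NLTK's word corpus and far more digits of pi; this Python sketch shows one plausible shape of the idea, with a tiny digit string and word set as stand-ins (the digit-pair-to-letter mapping here is illustrative, not the original scheme):

```python
PI_DIGITS = "14159265358979323846264338327950288419716939937510"
WORDS = {"dig", "pi", "to", "at", "it"}  # stand-in for the NLTK corpus

def digits_to_letters(digits):
    """Map digit pairs 00-25 to a-z, skipping out-of-range pairs."""
    letters = []
    for i in range(0, len(digits) - 1, 2):
        pair = int(digits[i:i + 2])
        if pair < 26:
            letters.append(chr(ord("a") + pair))
    return "".join(letters)

def found_words(text, min_len=2):
    """Return dictionary words appearing as substrings of the stream."""
    hits = set()
    for i in range(len(text)):
        for j in range(i + min_len, len(text) + 1):
            if text[i:j] in WORDS:
                hits.add(text[i:j])
    return hits
```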
Here's a sample output from the script above. Really groundbreaking stuff:
I was lucky enough to grab legendary bassist Joe Fee for an hour-long session of duo music on my senior recital. This is music without a net. Check it out below:
Dance Related Arts
This was a multidisciplinary project I did with six dancers and two other musicians. Together we created a 12-minute original dance piece inspired by Philippe Petit's 1974 World Trade Center high-wire walk and the film Man on Wire. I composed three songs for this work, the last of which I perform whilst "dancing." You can see me from the 8-minute mark onward:
GRLMTN is a loose collective of Ann Arbor based musicians. The collective releases yearly mixtapes featuring many of the artists in the collective. I contributed the first song off of AAural II, GRLMTN's second annual mixtape.
For my junior recital, I had the opportunity to play with some of my favorite people in the world and amazing musicians: Andy Warren, Jaren Strandlie, Ben Rolston, Ryan Proch, Woody Goss, and Glenn Tucker. Here are a few recordings:
2010 - Present
Here's an array of assorted one-off recordings and projects that I've done:
A track based on a sample from my guitar.
This is the story of a robot landing on Earth with the intent to destroy. After finding a Tamagotchi, the robot learns how to love and regrets his destructive behavior.
Based on a sample of my roommate being sick.
All made with patches from a 10-year-old classroom Casio workstation.
Also made with patches from the Casio.
The Voluptuous Neighbors
November 2014 - Present
I just joined a new (oddly named) band in Austin as the lead guitar player. We played our first show at the Rural Rooster as part of the East Austin Studio Tour on Sunday, November 23. Check out some old V-Neighbs demos and see upcoming shows here.
Upcoming shows in Austin:
Dec. 27th, Hole in the Wall
Jan. 11th, Saharah Lounge
Feb. 7th, Chicken Street Lounge
Somebody - Ryan Wolfe
February - September 2013
I directed, shot, and otherwise produced a music video with Ryan Wolfe that you can see below:
MSSAFR - Buck
The brilliant Christine Hucal let me shoot two installments of her MSSAFR (Mediocre Singing of Simone Songs in Asian Food Restaurants), only one of which made it off of the cutting room floor. Here's the one that saw the light of day:
George Fellowman Music Group - Mean Girls
George Fellowman is a reclusive fellow. You don't approach him; he approaches you. Well, fortunately, he approached me to do an early video centered on two of his soon-to-be smash hits. This is the first of those songs, Mean Girls:
George Fellowman Music Group - A Walk to Remember
George Fellowman asked me to shoot two videos that were essentially live takes of him performing two of his new compositions. The magic captured that day is not done justice by my camera work. Regardless, here is my best attempt at digitizing the moment: