Saturday, July 08, 2006

disarm


I used to be a little boy
so old in my shoes.
What I choose is my choice;
what's a boy supposed to do?
The killer in me
is the killer in you.

Sunday, June 25, 2006

the only digital synth i will ever want.

[Image: the Hartmann Neuron]


The Hartmann Neuron. $3700 of neural-net powered resynthesis. Read about what it does below (from SoundOnSound.com).


What Is Resynthesis?


The Neuron generates its sounds using a form of resynthesis named 'Multi Component Particle Transform Synthesis' by designer Stephan Sprenger. However, resynthesis has already been with us for a long time.


Many people have suggested that the PPG Realiser (born 1986, died 1987 when the company folded) was the first commercial resynthesizer, but I view it as more of a modeling synth, similar in concept to today's virtual analogue synths. Nonetheless, there was one true resynthesizer announced in the 1980s. It was the Technos Axcel.


The basis of the Axcel was radical at the time, although it is far more widely understood today. In short, the system loaded a sample, analyzed how the frequencies that comprised it changed during the course of the sound, and then rebuilt a close approximation to the original using a bank of amplitude-modulated digital oscillators. Of course, you could have done the same thing using an enormous additive synth, but you would have had to define incredibly complex multi-stage envelopes for every frequency contained within the sound. Thankfully, the Axcel did this for you.
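To make the principle concrete, here is a minimal Python/NumPy sketch of that analyze-then-rebuild idea: a crude STFT tracker measures how each harmonic's amplitude evolves over the course of a note, and a bank of amplitude-modulated sine oscillators plays the result back. It only illustrates the general technique; it is not the Axcel's (or the Neuron's) actual algorithm, and the function names and parameters are invented for the example.

```python
import numpy as np

def analyze_harmonics(sample, sr, f0, n_harmonics=16, frame=1024, hop=256):
    # Measure how the amplitude of each harmonic of f0 evolves over time.
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    n_frames = 1 + (len(sample) - frame) // hop
    envelopes = np.zeros((n_frames, n_harmonics))
    for i in range(n_frames):
        spectrum = np.abs(np.fft.rfft(sample[i * hop : i * hop + frame] * window))
        for h in range(n_harmonics):
            bin_idx = np.argmin(np.abs(freqs - f0 * (h + 1)))  # nearest FFT bin to this harmonic
            envelopes[i, h] = spectrum[bin_idx] * 2.0 / frame   # rough amplitude estimate
    return envelopes

def resynthesize(envelopes, sr, f0, hop=256):
    # Rebuild an approximation using a bank of amplitude-modulated sine oscillators.
    n_frames, n_harmonics = envelopes.shape
    n_samples = n_frames * hop
    t = np.arange(n_samples) / sr
    frame_positions = np.arange(n_frames) * hop
    out = np.zeros(n_samples)
    for h in range(n_harmonics):
        # Interpolate the per-frame envelope up to audio rate, then drive one oscillator.
        env = np.interp(np.arange(n_samples), frame_positions, envelopes[:, h])
        out += env * np.sin(2 * np.pi * f0 * (h + 1) * t)
    return out

# A 440 Hz test tone with a decaying second harmonic, analyzed and rebuilt:
sr = 44100
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 440 * t) + np.exp(-3 * t) * np.sin(2 * np.pi * 880 * t)
model = analyze_harmonics(note, sr, f0=440)
rebuilt = resynthesize(model, sr, f0=440)
```

Note that the "model" here is just the envelopes array, which is far smaller than the sample it was derived from.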

One advantage of this form of resynthesis is that the model of the sound can be much smaller than the original sample, and it becomes even smaller if you are prepared to compromise the accuracy somewhat. The second is that you can manipulate the parameters of the model to create new sounds based on the original, warping it into completely new timbres, or retaining enough of the original to be recognizable. Depending upon the complexity of the system, you can also perform tricks such as formant detection, which enables you to transpose sounds over a wider range with reduced munchkinisation. Furthermore, whereas short samples turn into brief blips at high pitches, the sound regenerated using a model can be extended in ways that cannot be achieved when replaying the original.


You might expect the Axcel to have been extremely basic compared to today's resynthesis systems, but it was not. It offered 1024 multi-waveform 'harmonic' generators, with 'intelligent' 1024-step pitch envelopes, plus similarly 'intelligent' volume envelopes and amplifiers. After resynthesis, the output from the Axcel's sound generator was passed through a pair of multi-mode filters, and you could affect aspects of the sound using 'intelligent' modulators, all adjustable in real time. If this sounds familiar, I'm not surprised; it is in essence the structure of the Neuron. Indeed, it's uncanny how much of the philosophy behind the Axcel is evident in the Neuron... not just the signal path, but even the real-time modification of the models (performed on the Axcel using a touch-sensitive screen rather than joysticks).


Things have moved forward considerably since 1988. Resynthesis is no longer the mystery it once was, and numerous hard- and software synths offer some form of it. Likewise, the science of resynthesis itself has progressed, and Stephan Sprenger's system goes way beyond building simple FFT models.


There's no reason why resynthesis should be based on sine waves, and any number of alternatives exist, each with individual strengths and weaknesses. This then leads us to the aspect of the Neuron that — if Hartmann Music's claims are to be accepted at face value — makes it different from other resynthesizers. Instead of using a single type of model to analyze and recreate all the sounds presented to it, the Neuron (or, rather, the Modelmaker software that creates the models) uses a form of processing called a Neural Net to create a unique model for each sample, such that it can be recreated and manipulated within the synth.


As discussed in the main part of this review, the natures of these models are not completely free on the Neuron, but are constrained by the 10 parameter sets provided by Modelmaker. Nonetheless, these provide significantly greater freedom than was available using the Axcel's single frequency/amplitude analysis. What's more, rather than create a single model for an evolving sound — which may be appropriate for some moments, but not for others — it is claimed that Modelmaker is capable of generating an evolving set of multiple models that 'morph' smoothly from one to the next. Confused? Then imagine a drum loop that contains kick drums and cymbals playing at different times. Clearly, a single model would be less than ideal for resynthesizing this, so the idea of creating several sub-models and stringing them together is sensible.
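Purely as an illustration of that stringing-together idea (the data layout and names below are invented, not Modelmaker's), a sketch of morphing through a sequence of sub-models might look like this:

```python
import numpy as np

def blended_model(sub_models, position):
    # Morph smoothly through a sequence of sub-models by interpolating between the
    # two nearest ones. Each sub-model here is just a vector of harmonic amplitudes;
    # position runs from 0.0 (first sub-model) to len(sub_models) - 1 (last).
    position = float(np.clip(position, 0, len(sub_models) - 1))
    lo = int(np.floor(position))
    hi = min(lo + 1, len(sub_models) - 1)
    frac = position - lo
    return (1.0 - frac) * sub_models[lo] + frac * sub_models[hi]

# A drum loop analyzed into three sub-models (say, a kick-heavy, a hat-heavy, and a
# cymbal-heavy stretch); sweeping 'position' with time strings them together smoothly.
rng = np.random.default_rng(0)
kick, hat, cymbal = rng.random((3, 12))           # placeholder amplitude vectors
midway = blended_model([kick, hat, cymbal], 0.5)  # halfway between kick and hat
```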


Unfortunately, no-one at Hartmann is handing out any information regarding the exact nature of the models generated for the Neuron. This makes it virtually impossible to test the veracity of their claims. Nonetheless, it's not difficult to understand what's going on, at least in part. Take, for example, the sample of my flute that I created when investigating Modelmaker. A suitable model derived from this sample should contain information about the frequencies contained in the note, the relative amplitudes of the tonal and noise components, the positions of the formants, the overall frequency response, and the perceived size of the cavity within which the sound acquires its unique timbre. If these are modeled successfully, you could then attach parameters to them, with names such as Low Turbulence, High Turbulence, Warm, Cold, Large, Small... and so on, each controlling an aspect of the resynthesized sound. And that's what happens when you use Modelmaker and the Neuron.
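As a toy illustration of what attaching such parameters to a model could mean (the field names and mappings below are invented for illustration, not Hartmann's actual parameter sets), consider:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ToneModel:
    harmonic_amps: np.ndarray   # relative level of each harmonic
    noise_level: float          # breath / turbulence component, 0..1
    formant_shift: float        # crude stand-in for perceived cavity size, in octaves

def apply_macros(model, warmth=0.0, turbulence=0.0, size=0.0):
    # Map user-facing knobs onto the underlying model parameters:
    #   warmth     tilts energy toward the low harmonics,
    #   turbulence scales the noise component,
    #   size       shifts the formant region down (larger cavity) or up (smaller).
    n = len(model.harmonic_amps)
    tilt = np.clip(np.linspace(1.0 + warmth, 1.0 - warmth, n), 0.0, None)
    return ToneModel(
        harmonic_amps=model.harmonic_amps * tilt,
        noise_level=float(np.clip(model.noise_level * (1.0 + turbulence), 0.0, 1.0)),
        formant_shift=model.formant_shift - size,
    )

# A rough flute-like starting point, then warmed up and made breathier:
flute = ToneModel(harmonic_amps=1.0 / np.arange(1, 17), noise_level=0.2, formant_shift=0.0)
warmer = apply_macros(flute, warmth=0.4, turbulence=0.5)
```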


Some people have questioned whether the Neuron really is a resynthesizer, or whether the internal drive is holding samples that are mangled in some way by the synth engine. To a large extent, this is Hartmann Music's fault. By inventing meaningless terms, they have obscured many aspects of the synth.

The facts are these — after invoking Modelmaker and asking it to process the source samples, a minimum of four files are produced: 'Mname', 'map.script', and as many Scapes and Spheres as are appropriate. 'Mname' is a text file that contains exactly what you would expect: the Model Name. The 'map.script' file is another text file, and contains the information about which source samples were used where, and what user-defined parameters you have applied.


The Scape and Sphere files are much larger... indeed, the Spheres are much larger than the original samples. Whether these contain the original sound or not is open for debate. On Hartmann's web site at www.hartmann-music.com/home/us/neuron/soundengine/soundengine_basics.html, it says that "these models contain the actual sound", while at www.hartmann-music.com/home/us/faq/#5 on... umm... Hartmann's web site, it says that "after analyzing the sampled audio data you feed into it, the samples are discarded and only the model information is kept". Who's right? Hartmann or Hartmann? You tell me.



Friday, November 11, 2005

don't have sex with someone you don't love enough to have married.

not because it is against God's will—though it is.


not because of STDs—though they are dangerous.


because it twists, perverts, and destroys the relationship. even if you are not morally convicted of your sin, it makes it hard to communicate, verbally and emotionally. it puts a premium on time spent in rather than with each other. it draws the focus to the wrong place, and ultimately it draws you away from God.


my credentials are personal experience and the experiences of several very close friends.

Monday, November 07, 2005

islam is a peaceful religion.

That's why its adherents murder their daughter's boyfriend.
That's why thousands of young Muslim men in France have been conducting a street war against the entire nation, sparked by the deaths of two of their own through no fault of the government, and why they stone the firefighters trying to save them.
That's why they set old ladies on fire.

Tuesday, September 20, 2005

omg sushi.

Just had my first sushi with Jamil, Aaashley, and Steve, and it was amazing.
Jamil is a regular at the place we went, so the chefs know him (and know he leaves generous tips), and they made up something called a samurai roll for us on the house. It was delicious: crab, pickle, crunchies, a creamy sauce that makes soy sauce unnecessary, and no seaweed. Pure yum.


Also ate tuna roll, so I got my I-eat-raw-fish cred all in order, and baked salmon, which was like the meat equivalent of candy. Really full flavor, lovely texture, a sweet glaze that complemented and enhanced the fish incredibly well, and minimal seaweed.


Oh, and did I mention it was free, cos Steve doesn't like raw fish and Jamil is incredibly generous? Total score.


A win, I say.

drafting.

Just got done with my first drafting project for Theatre Graphics, which is due tomorrow at 0930. It was arduous, to say the least, and I'm ashamed of my inability to mold my handwriting into something regular and precise. It's so close to being a really nice, stylized USITT alphabet, but it's irregular, not just in size, but from instance to instance of each letter.
*sigh*


You'd think a typographer/graphic designer/calligrapher/handwriting engineer/language freak would be able to do this, but no. I still throw Cs in there that are a full three points bigger or Ts that have descenders like I bought a thousand shares in C&T, Inc.

Friday, September 09, 2005

i miss you.

a lot.

Wednesday, August 24, 2005

what can't linux do?

I just played a Chemical Brothers video on my iPod.
Let me say that again, so I can be sure you understood.

I just played a Chemical Brothers video on my iPod.

Here's how.