Sunday, March 30, 2008
CONCEPTS OF MUSIC WEEK 4 GOOD VIBRATIONS
Good vibrations indeed! A meaty lecture to be sure and a thoroughly enjoyable excursion into the workings of sound. What was more confusing for yours truly was the analysis of, well... the analysis of sound. I spent some time this afternoon looking at Fourier transforms in the broadest possible sense. There's a lot of chunky maths there that I simply didn't bother with, but I noticed in my internet meanderings that the applications of Fourier analysis seem vast, from engineering to digital multimedia. I think the theorem provides a method by which complex waveforms (or functions) can be represented as sine waves of different frequencies which sum to the original waveform. This probably answers the question I asked in my Audio Studies blog this week about how complex waveforms are acoustically represented by audio equipment... or does it? How deep my understanding of this concept needs to be is something I'm not certain of at this point in my aural studies life, but I hope to find out. The beautiful harmonic series was explained; this concept I dealt with more easily as I remember it somewhat from both high school music and physics (I should probably remember the Fourier transforms as well but...) and I enjoyed immensely Stephen's manic pipe-twirling demonstration of same. Whatever shenanigans will he entertain us with next week? Great fun.
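To convince myself of the summing-sines idea, here's a minimal sketch (my own, not from the lecture) that builds a waveform from three harmonics and then uses an FFT to pull the component frequencies back out. The 220 Hz fundamental and the amplitudes are just numbers I made up for illustration.

```python
import numpy as np

sample_rate = 44100                       # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)  # one second of time points

# Build a "complex" waveform from the first three harmonics of 220 Hz.
fundamental = 220.0
wave = (1.0 * np.sin(2 * np.pi * fundamental * t)
        + 0.5 * np.sin(2 * np.pi * 2 * fundamental * t)
        + 0.25 * np.sin(2 * np.pi * 3 * fundamental * t))

# The Fourier transform turns the waveform back into its frequency components.
spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1.0 / sample_rate)

# The three strongest bins should sit at roughly 220, 440 and 660 Hz.
strongest = freqs[np.argsort(spectrum)[-3:]]
print(sorted(strongest.round()))
```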
AUDIO STUDIES WEEK 4 SAMPLING THEOREM
Ah, at last I get my head wrapped firmly around the idea of sampling rate and bit depth, I hope. The concept of bit depth determining the amplitude resolution now makes sense, as the amplitude over time determines the wave shape. I presume that the high rate of sampling, coupled with the large number of amplitude increments possible even at a 16-bit depth and 44.1 kHz sample rate, is what allows accurate representation of the very complex sounds (wave shapes) that comprise music, although the graphic portrayals we have been looking at give the impression that everything comes out sounding like a sine wave, ha ha. This is another difficult concept to get used to: how do ears, speakers, microphones and digital representations reproduce complex sound structures? In the analogue realm, surely all anything does is vibrate, and in the digital realm it's all just on or off; how then do we hear a full orchestra and its individual components? I think someone brought this up in Concepts of Music this week also, and although Stephen explained it to some extent, it's a difficult thing to visualise (and some of us need to visualise to understand). I confess to having had some difficulty in completing the practical exercise on time this week. I'm unfashionably unfamiliar with the Mac environment and this is slowing me down – the actual software presents no problem for me, just finding documents. Silly, I know. I'll get used to it in time.
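For my own benefit, here's a back-of-the-envelope sketch of what those CD settings actually mean in terms of resolution. The 16-bit and 44.1 kHz figures are the standard ones; everything else is just my illustration.

```python
bit_depth = 16
sample_rate = 44100

levels = 2 ** bit_depth   # 65,536 distinct amplitude steps
step = 2.0 / levels       # step size across a full-scale range of -1.0 to +1.0

print(f"{levels} amplitude levels, each {step:.6f} of full scale apart")
print(f"{sample_rate} of these amplitude snapshots are taken every second")
```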
SES WEEK 4 MICROPHONES
An exhausting (but not exhaustive by any means) survey of the microphone collection offered by the EMU. Of particular interest to me was the experiment whereby we set up 3 different microphones for the same source and listened to each independently. The lesson in this experiment was clear: there is no given "best microphone" for any application, and different microphones will give better (or worse) results for different sources in different locations and, who knows, maybe even different atmospheric conditions! The differences in the microphones' warmth and colour were far more obvious when applied to a female (higher frequency) voice as supplied by Kristie than they were when we listened to the manly tones of Sam and Joe. I found that interesting to note also. I've done some research and found that the cardioid polar pattern is constructed by a sort of "combination" of the techniques used in omni and figure-8 polar patterns, where the shape of the cardioid is determined by the amount of either polar pattern engineered into the microphone (I've scribbled a little sketch of this blend idea below the links). The question of how this is achieved arose during the lecture and it was suggested that this was done electronically within the mic, but I get the impression that there is also an acoustic labyrinth behind the diaphragm which determines the final spatial sensitivity. Here are a couple of links which provide really good microphone theory overviews: http://www.soundonsound.com/sos/mar07/articles/micpatterns.htm
http://ccrma.stanford.edu/courses/192a/SSR/Microphones.pdf (THIS IS A PDF, AS IT SAYS)
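And here's that little sketch of the blend idea: a hedged approximation of first-order microphone sensitivity, where a single mixing factor k slides between omni (k = 0), cardioid (k = 0.5) and figure-8 (k = 1). The function name and numbers are my own invention, not anything from the lecture.

```python
import math

def sensitivity(angle_degrees, k=0.5):
    """Blend of an omni component (1 - k) and a figure-8 cosine component (k)."""
    theta = math.radians(angle_degrees)
    return (1 - k) + k * math.cos(theta)

for angle in (0, 90, 180):
    print(angle, round(sensitivity(angle), 2))
# A cardioid (k = 0.5) gives 1.0 on-axis, 0.5 at the sides and 0.0 at the rear.
```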
MTF WEEK 4 CRITICAL LISTENING
This week's session was almost entirely focussed on critically listening to "pop"* recordings and discussing their production attributes. Both Daves seem keen for us to understand that there is no "right way" or "wrong way" to produce; in the final analysis the recording either sounds good or it doesn't, irrespective of the production techniques employed. I should qualify the phrase "sounds good" in this context, as there are certainly different levels of listening. One wouldn't necessarily expect the same level of critical listening from the average pop music CD purchaser as one would from a professional sound engineer. So why are we striving for maximum quality in every project when the final market might be, say, AM radio? I think it's because no matter what market you are producing your recording for, you never really know what system the music is going to be played on. So the Kylie recording which sounds OK on your crappy home hi-fi may not sound so good on your extremely expensive professional sound system. Also, if someone is paying you top dollar to produce their music, then surely you owe them the best that you are capable of. The other lesson I learnt this week is that going to a reputable studio is no guarantee of top quality production; there must be more to the process than simply spending money, and this is probably where the producer, and specifically the producer in partnership with the engineer, comes into play. If the band in question has no producer and only a vague idea of their own role as self-producers, then the end result might not be all that they are hoping for.
*I use the word “pop” here in the broad sense of “popular” music of all genres.
Tuesday, March 25, 2008
CONCEPTS OF MUSIC WEEK 3
As I sit here blogging away (oh, what a revolting concept), I am conscious of the fact that I didn't wear my ear plugs at band practice last night and my ears are ringing merrily away – shades of Quasimodo… "the bells, the bells!". I now have transient-tinnitus phobia and will never so much as set foot outside of an anechoic chamber again without being equipped with industrial-strength hearing protection. On the subject of which (great segue, me), wasn't last week's field trip cool? What a wonderful experience! My only regret was not being able to hear my heart beat in the anechoic chamber, from which I conclude either that I am the walking dead or that I was surrounded by giggling sound engineering students. Alright, I giggled too. Next, the tremulous thrill of the lethally revolving reflector in the echoic chamber, so reminiscent of some James Bond slow-death scenario presided over by a cackling madman (who somehow doesn't have time to hang around and make sure JB actually dies). The study of sound as an abstract is becoming increasingly enjoyable to me, balanced as it is in this course by the study of audio in very physical detail. In closing, I mentioned to (visiting from interstate) Dad that I'd had an anechoic experience, which launched him into reminiscences of his time at Bavarian Radio immediately post WWII. He recalled that not only did they have an anechoic chamber, they apparently had a "concrete room in the cellar with a giant steel plate"… this week's quiz is: what was the room used for?
AUDIO STUDIES WEEK 3 EDITING
And so we get gritty with the nitty at last. I'm not sure I've quite got my head around the sampling rate and bit depth concepts. I'm happy with the verbal definitions, but the visual presentation via the diagrams Christian furnished leaves me confused. Oh, and the idea that each sample/snapshot is a representation of the sound's amplitude at that time… surely there should also be some information as to the frequency and other characteristics, or you would just end up with a digital representation of the amplitude envelope of the sound? As far as using the software goes (Bias Peak LE again), I have no issues with that. It seems like fairly straightforward editing, analogous to video or even word processing when you get down to it. It's funny how computers have enabled us to do pretty much the same things over a large range of applications; I suppose that's the advantage of "non-linear" access, but it still seems two dimensional in some ways. As to the pros and cons of analogue versus digital media, personally I had never given it much thought before. One just accepts the technology as it happens on the assumption that it must be an improvement on what was available before. Upon analysis, it seems that this assumption works in the field of audio and sound engineering, at least within the realms of technology which can be afforded by mere mortals such as myself.
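Having fretted about the frequency question above, I tried a tiny sketch (entirely my own, with made-up numbers) to convince myself that a list of plain amplitude values really does carry the frequency: the frequency falls out of how those values change from one sample to the next at a known sample rate.

```python
import math

sample_rate = 8000   # deliberately low rate, just for illustration
frequency = 440.0    # the A above middle C

# Ten consecutive samples: each is just a single amplitude number...
samples = [math.sin(2 * math.pi * frequency * n / sample_rate) for n in range(10)]
print([round(s, 2) for s in samples])
# ...yet replayed at 8000 samples per second those numbers trace out a 440 Hz sine.
```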
Monday, March 24, 2008
MTF WEEK 3 THE BIG BAD RECORD INDUSTRY
More discussion about the respective roles of the music producer and the studio engineer, and I must admit that, as an (alleged) artist, I find the idea of someone meddling in my musical affairs quite repellent, though perhaps I would feel differently if I were being paid in 6 figure amounts for my music. So, we understand that the producer has a great deal of creative control over the project content, while the engineer is relegated to the task of making everything sound good, sometimes hampered or assisted in this process by the producer once again. However, the jobs cross over to some extent and, in smaller recording projects particularly, it is not uncommon for the engineer to wear the hat of the producer, and even the humble artist to assume some responsibility for the content of their own CD, wow. The subject of royalties is one of perpetual interest to me as I'm always trying to find ways of getting some. The trick in the music biz seems to be to write hit songs and have some big name perform them (now that I know this, I expect to get rich as soon as I stop blogging and start writing songs again). The songwriter gets the sales royalties, the performer gets performance royalties and there are mechanical royalties to be divvied up as well. Where the hell's mine?
SES WEEK3 SIGNAL PATH AND GAIN STRUCTURE
OK yes, here am I still giggling about the concept of the wall bay as a "wallaby", particularly given the context of the EMU studio. What led you astray, Liana? But I'm like that. So we discussed signal path – it's the path the signal travels, apparently (I would never have guessed) – and we kinda watched its progress through the wallaby and into the Studio 2 Pro-tools setup. This was followed by an attempt to stimulate the studio monitors into action, given that they had now apparently been permanently wired into the master output (which seems like the logical thing to me, and I wonder why they weren't before?). Also, the control room wallaby has apparently been repatched and relabelled, specifically to embarrass David while he is trying to maintain the flighty attention of Sound Engineering students. It was a good practical exercise in tracking down problems, though I still reckon that 6 people at a time in the control room is enough – I can't see or hear what's going on otherwise. We mic'd up the ever-present crappy radio and obtained the desired effect at the end, while I gained some useful insights into other people's musical tastes, ahem. Gain structure, well... the cut or boost applied to the signal as it jaunts along its path. Both of this week's concepts are crucial to successful audio production, both live and in the studio, it seems to me.
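To make the gain idea concrete for myself, here's a loose sketch (my own figures, nothing to do with the EMU patching) showing how the cuts and boosts in dB simply add up along the path, which is the same thing as multiplying the amplitude gains.

```python
def db_to_gain(db):
    """Convert a level change in dB to an amplitude (voltage) ratio."""
    return 10 ** (db / 20.0)

# Hypothetical gain stages along a simple signal path, in dB.
path = {"mic preamp": +40, "channel fader": -6, "master fader": -3}

total_db = sum(path.values())
total_gain = db_to_gain(total_db)
print(f"net gain: {total_db:+} dB = x{total_gain:.1f} in amplitude")
```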
Sunday, March 16, 2008
CONCEPTS OF MUSIC - WEEK 2
More definitions! I notice a relationship between concepts that we had discussed in Audio Studies and those discussed in the Aural component of this week's Concepts of Music lecture, namely: sound as a physical event and sound as a subjective experience (psychophysical). It's becoming clear to me that sound engineering involves finding a working balance between those definitions of sound within one's professional life, and that good sound engineers are possessed of not only a useful pair of ears but also a fine sense of auditory discrimination and the ability to truly hear and analyse perceived sound. I found it interesting to examine the supposition that the relationship between quantifiable and qualifiable sound is a non-linear one: this makes sense and is one of those things that one probably knows at some instinctive level but has no reason to expound upon (until one embarks on a potential career as a sound engineer, anyway). As Stephen said, the ear is the final judge, but the tools in the studio environment can make the engineer aware of changes in sound that he or she cannot otherwise detect. I wonder how useful this is? If the engineer, with their supposedly refined hearing skills, cannot detect these sonic variations, are they of any importance, given the supposition that most of the sound being produced in the world is designed for human beings to listen to? Nathan (reading over my shoulder) suggests that minor, undetectable variations in the components of the mix may have a cumulative effect, which when compounded become noticeable. Leave it with you....
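On the non-linear point, here's a minimal sketch of the sort of thing I mean (my own example, not Stephen's): repeatedly doubling the physical amplitude doesn't double the level we talk about, it just adds roughly equal steps in decibels.

```python
import math

def level_db(amplitude_ratio):
    """Level change in dB for a given amplitude ratio."""
    return 20 * math.log10(amplitude_ratio)

for ratio in (1, 2, 4, 8):
    print(f"amplitude x{ratio}: {level_db(ratio):.1f} dB above the original")
# x2 -> +6 dB, x4 -> +12 dB, x8 -> +18 dB: equal dB steps for repeated doublings.
```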
AUDIO STUDIES WEEK 2 WHAT IS AUDIO?
In these first 2 weeks the focus is around getting definitions and differences sorted out: physical vs. psychophysical sound, listening vs. hearing, analogue vs. digital etc. This week's audio studies class delved into almost Einsteinian regions of wave vs. particle natures, where the analogue audio signal is represented as a continuous stream of audio information while the digital signal manifests itself as a series of discrete, numerically represented "bits" of data, replayed at such speed that the ear cannot detect the individual sounds. It's interesting that the ear is more discerning in determining individual events than the eye, as per the example Christian gave of video/film. I believe the frame rate of film is something like 24 frames per second, and I wonder what music would sound like if we captured the sound at a rate of 24 times per second and played it back. Is that how it works? Is that what the sample rate (rather than the "bit rate") in Pro-tools refers to? If so, I think the rate was around 44.1 kHz, ie: 44,100 samples per second?
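Just to put those two numbers side by side (the 24 fps and 44.1 kHz figures are the standard ones; the arithmetic sketch is my own):

```python
film_fps = 24
audio_sample_rate = 44100  # samples per second; bit depth is a separate setting

# How many audio "snapshots" fit inside a single film frame?
print(audio_sample_rate / film_fps)  # roughly 1837 samples per frame
```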
We also looked at hardware and software and mucked around with unpatched hardware in the lab. My philosophy on software is: just do it. Usually, once you get your hands on the mouse and are surrounded by a bunch of people with various degrees of experience in the application, things become clear fairly quickly. Today’s piccy is my audio studies doodle, I’ll try to include something more relevant next time.
Saturday, March 15, 2008
SES WEEK 2, SETTING UP AND PRO-TOOLS
This week's lecture emphasised the importance of planning a recording session in advance, down to the nitty gritty. I suppose it's obvious in retrospect that to arm yourself with information regarding the number of pieces to be recorded, the instrumentation for each, the musicians' names, the availability and type of equipment on hand at the studio and more, can only result in the recording proceeding smoothly and producing a better result... as a result. Some of the information David recommended gathering wouldn't have occurred to me naturally, particularly designing the track placement, room placement and recording schedule in advance, but I see the sense of it. I also think that, as an engineer, one would need to adjust one's research to the scale of the proposed project, ie: there's not a lot of point in spending hours following a band around at rehearsals and gigs, and pumping them about their favourite beer etc, if this same band is planning on spending around $500 to thrash out a 3 song demo in your studio over the course of the day (including overdubs and mix-down!). Common sense prevails (probably). The next part of our workshop involved designing a (band) recording session and setting up the project in Pro-tools. I'd be happier if there was a projector or something feeding off the Pro-tools monitor as I couldn't see a damn thing David was doing; however, the software seems (fortunately) fairly straightforward and I'm anticipating having a bash at it.
MTF WEEK 2
The main areas covered in this week's forum embraced 2 aspects of the pop music recording process: what makes a pop song and what constitutes a good recording? I have (co-)written over 100 songs and hope that no-one EVER refers to them as pop music; however, I recognise that there is a formula, with variations, inherent in popular music and present in my own work. Philosophically, I wonder if a pop song HAS to be structured this way to be "popular". Is it necessary to assume a "lowest common denominator" in our target audience, ie: are the punters really so stupid they can't deal with any other musical format? Leave it with you. (NB: I remember lessons on music structure when I was in high school and I THINK the technical term for a song made up of verses and choruses is "strophic".) Regarding the recording process, we mused at length about compressors, a piece of equipment I had heard of but never really understood the function of before. Great! The underlying premise of quality recording seems to be one of extreme common sense: to ensure a high quality recording, you must use high quality instruments, mics, signal processors and recording devices, and position your sources with care and thought. Actually, that's probably a bit simplistic and I'm sure isn't always the case! The above is a screen shot from Cool Edit of a wave on which I whacked some compression willy-nilly (left channel only) for comparison.
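Since the compressor finally made sense to me, here's a very rough sketch of the basic idea using made-up numbers (the threshold and ratio values are purely my assumptions): once the signal level passes a threshold, anything above it gets pulled down by the ratio.

```python
def compressed_level(level_db, threshold_db=-20.0, ratio=4.0):
    """Output level in dB for a simple compressor with a hard knee."""
    if level_db <= threshold_db:
        return level_db                   # below threshold: left untouched
    excess = level_db - threshold_db      # how far the signal is over the threshold
    return threshold_db + excess / ratio  # the over-threshold part is reduced

for level in (-30, -20, -10, 0):
    print(f"{level} dB in -> {compressed_level(level):.1f} dB out")
# -10 dB in becomes -17.5 dB out: loud parts come down, quiet parts stay put.
```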
Tuesday, March 4, 2008
SES WEEK 1 MIC LEADS AND STANDS
Today's lecture was good news. I suppose it put the whole Diploma in context for some of us who are perhaps feeling a little bemused by the Uni environment, which can leave one overwhelmed with procedure. However, we broached the confines of the studio and lo, we now have a justification for that which has come before and a sense of anticipation for that which is to follow (I'm being subjective and reflective, not to mention silly, here).
On a practical note, what can compare with the satisfaction of watching a smoothly uncoiling microphone lead trail obediently behind one across the floor, knowing all the while that one has caused this poetry-in-motion to be? To experience at first hand the workings and mechanical ingenuity of a precisely engineered microphone stand - so distant a cry from the, dare I say it, crap that one is so used to employing in the course of one's employment as the lowest quality of live musician. Finally, the tremulous thrill of clutching $6000 worth of Neumann microphone (spelling uncertain) is a highlight I shall certainly be sharing with my nearest and dearest.
It's a shame that such a long school day precedes the studio session - I for one would love to approach the studio with a fresh brain (my own), but c'est la vie and whoopee, "we gonna have some fun tonight". (That was my first, last and only un-serious post, promise)
MTF WEEK 1 INTRODUCTION
Hi everyone - this is just me checking out this blog biz which is probably totally old-hat to everyone else but fun for me - yay!
Here's an amusing pic one of my children snapped of me at a "spooky" birthday party (I'm just checking out this embed piccy stuff, OK?): note the scotch n dry in left hand....