Metallica Garageband Settings

GarageBand has been a breakthrough for aspiring Mac musicians. Thanks to its simplified interface and hard-to-beat price ($49 with Apple’s iLife ’04 suite), even novices can start recording digital masterpieces. But things can go wrong in GarageBand—missing loops, silent instruments, and so on. Here are some recipes for solving the most common GarageBand problems.

James Hetfield’s Guitars and Gear. James Alan Hetfield (born August 3, 1963, in Downey, California) is the rhythm guitarist, co-founder, main songwriter, and lead vocalist of the American heavy metal band Metallica. Hetfield co-founded Metallica in October 1981 after answering a classified advertisement placed by drummer Lars Ulrich.

A note on loudness before chasing that tone: getting the most volume out of your mix actually comes from balancing the frequencies, not the volume. Turn the analyzer on and make note of where the peaks are and where they start falling off in each track. Then make sure you’re spreading the mix out; don’t crowd your bass and bass drum around the same frequency and volume (gain).

After watching a bunch of one guitarist’s GarageBand covers of Metallica tunes, I asked him about his settings—I’d always wondered why I couldn’t get as good a tone out of it. As you’ll see, many of the settings seem crazy (several with treble at 0, for example), but they sound great, and I’ve confirmed it with my own setup. Anyway, here’s the video.

Your Loop Browser Is Empty

Adding loops to GarageBand isn’t just a matter of stuffing them into a folder on your hard drive. You also have to make GarageBand aware of them. To do that, you must force GarageBand to index new loops, thereby building an internal card catalog of the loops and their locations.

If anything goes wrong with GarageBand’s loop index, you may discover that all the buttons in the Loop browser are dimmed and no loops appear in its list. Some loops may exist in name only.

In these situations, the solution is to rebuild the GarageBand loop index.

Step 1 Quit GarageBand. In the Finder, open your hard drive and go to Library: Application Support: GarageBand: Apple Loops. Inside, you’ll see the text files that constitute your current index. Drag these files to the Trash.

Step 2 Restart GarageBand and click on the Loop Browser button (which looks like an eye). GarageBand will display the No Apple Loops Found dialog box. Click on OK and return to the Finder.

Step 3 Open your hard drive and go to Library: Application Support: GarageBand. Drag the Apple Loops folder from the GarageBand folder into any visible portion of GarageBand’s Loop browser. GarageBand will dutifully re-create its loop index, based on the current location and contents of that Loops folder.

Step 4 If you’ve installed the Jam Pack, drag its folder (also in Library: Application Support: GarageBand) into the Loop browser, too.
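Incidentally, if you find yourself repeating Step 1 often, the cleanup is easy to script. Here’s a minimal Python sketch, assuming the index really is just the set of text files at the path named in Step 1 (you may need administrator rights to touch /Library). It moves the files aside rather than deleting them, so you can restore the old index if anything goes wrong.

```python
from pathlib import Path

# The index location from Step 1; adjust if your install differs.
index_dir = Path("/Library/Application Support/GarageBand/Apple Loops")

# Park the old index files on the Desktop instead of trashing them,
# so they can be restored if the rebuild misbehaves.
backup_dir = Path.home() / "Desktop" / "old-garageband-loop-index"
backup_dir.mkdir(parents=True, exist_ok=True)

for index_file in index_dir.glob("*.txt"):
    index_file.rename(backup_dir / index_file.name)
    print(f"Moved {index_file.name}")
```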

No Sound from a Mike or a Real Instrument

If GarageBand doesn’t seem to be “hearing” anything from your microphone or line input, the Mac probably hasn’t been taught to listen to the right audio source. Open OS X’s Sound preference pane and click on the Input tab. Select the sound source you want to record, and then adjust the Input Volume slider for your singing or playing. (The loudest notes should briefly illuminate the rightmost bars.) Return to GarageBand and open its Audio/MIDI preferences. Click on the Audio Input pop-up menu and make sure that the correct input is selected.

If the problem persists, try these steps:

Step 1 Make sure that you’re working with a blue Real Instrument track. If a green Software Instrument track is selected, GarageBand ignores your microphone or instrument.

Step 2 Make sure that you haven’t muted this track or soloed another one. (The speaker icon in your track’s header should be blue, but no other instrument’s headphones icon should be lit up.) While you’re at it, make sure the track’s volume slider isn’t set to zero.

Step 3 GarageBand may be set up to listen to the wrong channel. Double-click on the track header and examine the Input pop-up menu and Format (Mono/Stereo) settings.

Step 4 If you’re using a mixer or an audio interface, make sure that its volume isn’t turned all the way down.

Step 5 Check whether you’re hearing any sound from the instrument. If it’s electric, make sure that it’s turned on, with the volume turned up. Also make sure all the connections are good—especially if you’ve incorporated plug adapters into the mix.

Step 6 If you’re using a Griffin iMic adapter, make sure that you’ve connected the mike or instrument to the iMic’s input jack (the one with a microphone symbol). Also be sure to switch the iMic’s selector toward the microphone symbol. Open GarageBand’s Audio/MIDI preferences and choose iMic USB Audio System from the Audio Input pop-up menu.

No Sound from a MIDI Instrument

Having trouble getting GarageBand to hear an external MIDI instrument, such as a keyboard controller or a MIDI guitar? If so, your MIDI status light can help you identify the culprit.

Status Light Doesn’t Respond If the MIDI status light doesn’t flicker when you play, MIDI information may not be reaching GarageBand. Make sure that the instrument is turned on and connected to the Mac.

If a MIDI interface box is involved, double-check your connections. It’s very easy to get the MIDI In and MIDI Out cables confused.

Open GarageBand’s Audio/MIDI preferences. The MIDI Status line should read “1 MIDI Input(s) Detected” (or however many instruments you have connected). If it doesn’t, your MIDI interface may require its own driver software. Visit the manufacturer’s Web site to seek out an OS X-compatible driver.

Status Light Does Respond If the indicator does flicker on and off, then everything is correctly hooked up. In that case, make sure that you’ve selected a green Software Instrument track. Also check that you haven’t muted this track or soloed another one.

If you still encounter the problem, go to Window: Keyboard to open the on-screen keyboard. Click on a few keys to ensure that the selected track has a working instrument selected. You might also try double-clicking on the track header to open the Track Info dialog box. You should have an instrument and effects preset active in the top two lists. In the Details panel, try turning off your effects one by one until you find the problem. (It’s possible to fiddle with the effects so much that no sound emerges.)

Finally, check whether you, in fact, have any Software Instruments available. Double-click on a track header to see if anything is in the list. If you or somebody else has been doing some naughty playing in the Library: Application Support: GarageBand folder, the files may be so dismantled that you need to reinstall GarageBand to get it going again.

No Sound from External Speakers

Ordinarily, GarageBand plays back sounds through your Mac’s audio circuitry—either through your Mac’s built-in speakers or through speakers connected to its headphone jack. But what if you’ve bought fancy USB speakers? Or what if you’ve connected an audio interface box that’s hooked up to its own sound system? In those cases, you’ll need to adjust your audio options. Open GarageBand’s Audio/MIDI preferences. Click on the Audio Output pop-up menu to select the name of your external speakers or interface box. If you don’t see it listed, you may need to install the appropriate driver—this usually comes with the speakers or audio box.

Panned Tracks Still Play in Both Speakers

In GarageBand, you can use the Pan knob to place a certain track’s instrument all the way to the left or right side of the stereo field. If you do this but still hear the darned thing coming out of both speakers, there are two possible explanations.

The likeliest culprit is an Echo or Reverb effect that you’ve applied to that track. When you select these options for an individual track, you’re not really applying a different echo or reverb to the track. Instead, you’re telling GarageBand how much of the master track’s echo and reverb to apply. As a result, you’ll hear your fully panned track reverberating through both speakers, courtesy of the master echo or reverb. The solution, then, is to turn off the panned track’s Echo and/or Reverb effect. Double-click on its track header, and deselect the corresponding options.

This problem can also occur if you’ve used the Compressor effect. Turning on Compressor for the master track forces all tracks to play in both the left and the right channels. To solve the problem, turn off the master track’s Compressor effect. (Turning off Compressor for an individual track won’t make a difference.)

[This article is an excerpt from GarageBand: The Missing Manual, by David Pogue (2004); reprinted by permission of O’Reilly Media.]

If you’re having loop problems, trash your old loop index files.

The handy MIDI indicator flickers blue whenever GarageBand is receiving note data.

In this tutorial, I’m going to show you how I use EQ in Garageband, both in the mixing phase and in the final mastering stage at the end. But first: what is it, and what’s the quickest way of going about it?

EQ, or equalization, is the act of adjusting low, middle, and high-end frequencies to improve the sound of an audio recording. To use the EQ in Garageband:
1) Select the Channel EQ plug-in in the Smart Controls
2) Choose a preset from the drop-down menu like “Acoustic Guitar,” or “Natural Vocal.”
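Under the hood, each band of an EQ like Channel EQ is a small filter. GarageBand’s actual implementation isn’t public, so purely as an illustration, here’s a minimal Python sketch of the standard “peaking” (bell) band from the widely used Audio EQ Cookbook; the sample rate, Q, and the noise stand-in for real audio are all assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """Audio EQ Cookbook peaking (bell) filter coefficients.

    Boosts or cuts by `gain_db` decibels around centre frequency `f0` Hz,
    with bandwidth set by `q`, at sample rate `fs`.
    """
    a = 10 ** (gain_db / 40)                 # square root of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]          # normalize so a0 == 1

# Example: the +5 dB kick boost at 74 Hz that comes up later in this tutorial.
fs = 44100
b, a = peaking_eq(74, 5.0, 1.0, fs)
kick = np.random.randn(fs)                   # stand-in for one second of audio
boosted = lfilter(b, a, kick)
```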

I’m going to run through a quick tutorial on how I equalized a song for a client. You’ll be able to see all of the changes I made and why I made them, and afterward I’ll discuss some of these concepts in full, alongside a YouTube video. Also, since I first wrote this article, I’ve upgraded to better plugins like FabFilter’s Pro-Q EQ (from Plugin Boutique), which is one of the best EQs on the market for a number of reasons, including that it lets you solo frequency ranges. But I digress.

By the way, I have a list of all the best products for music production on my recommended products page, including the best deals, coupon codes, and bundles, so you don’t miss out (you’d be surprised what kinds of deals are always going on).

How To EQ Bass, Guitar, Drums, And Synths


1) So, I have my song created in Garageband, using several software instruments: an electronic drum kit, a guitar, cymbals and hi-hats from an actual drum kit, a bass guitar, and violins.

The very first part of the song, which plays before everything else, has had its EQ adjusted as shown in the image below.

It’s a guitar part that I created by playing it back through the laptop speakers and re-recording that sound with the computer’s built-in microphone (the laptop is the one I recommend from Amazon):

You’ll notice that I’ve scooped out the highs as well as the lows, which gives a sort of lo-fi effect. It’s a good way to introduce a song, and you’ll notice it sounds quite similar to the way a flanger works. You can get even more creative if you automate your EQ, as I’ve shown in my other tutorial.

2) For the next guitar, which serves as the main riff for the song, I’ve scooped out the low and sub-bass frequencies, because there is no need for them to be there. The same goes for nylon-string guitars, as I noted in my other guide.

This not only makes room for other frequencies in this area but also has the effect of bringing the guitar part “forward.”

3) I left the snare alone because I like the way it sounds on the default setting. If you spend a lot of time on the producer section of Instagram, you’ll notice other producers talking about how they obsess over EQing the snare.

You don’t have to put a ton of work into something if you think it already sounds good as it is.

4) The next instrument track is the kick, and for it, I boosted the frequencies at 74Hz by +5dB. This has the effect of making the kick much fatter and thicker.

5) Moving on to the Boutique 808s, you’ll notice that I scooped out the sub frequencies starting at around 30Hz, and I also scooped out the frequencies from 1 kHz all the way up to 20 kHz. That’s because there is no reason for those frequencies to be there. It’s worth mentioning that if you use my favorite 808 plugin, Initial Audio’s 808 Studio II from Plugin Boutique, you won’t even need to change the EQ that much.

Because I used Garageband’s stock 808 plugin instead of 808 Studio II at the time I made this article (in fact, I didn’t even know about it then), I had to boost the frequencies from 50Hz all the way up to 300Hz by around +1.5dB to +3dB.

6) For the guitar solo at the end, which lasts through most of the song’s ending, I scooped out the sub frequencies, as the image below shows.

Then I gave a boost of around +2dB at 1 kHz, and another boost from around 2 kHz all the way up to the top of the range. This gives it clarity and more high end, so the guitar part can cut through the rest of the mix without having its volume turned up.

At this stage, the song has had enough of its EQ adjusted. I also use things like compressors and distortion on the software instrument tracks; however, this tutorial isn’t about that, so I won’t get into it here. That’s for another tutorial.

7) Following the EQ for the guitar solo is the EQ for the bass, which you can see in the image below.

For this instrument, I subtracted pretty much all of the higher frequencies past 2 kHz. This allows more room for the other frequencies in this area to shine, and it gives the song a much thicker bass sound without being too overpowering.

Now we’ll move on to the final stage: mastering.

Mastering EQ Stage

This stage comes after I’ve exported the song to my desktop, started a new project, and dragged and dropped the AIFF file into it. The EQ shown below is on the master channel, which I explored more in my mastering tutorial.

Truth be told, EQ is one of those things you want to approach piece by piece. In other words, you make a lot of small changes, which combine to make a big difference by the time the track is finished.

In the image below, you can see that I haven’t changed all that much.

I just dropped out the sub frequencies a little bit between 20Hz and 40Hz, gave a tiny boost around 90Hz to fatten up the kick one more time, dropped out a few low-mids at 200Hz, boosted 500Hz by a bit, and then gave a small boost of about +1.5dB to the frequencies between 1 kHz and 18 kHz.
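To make those moves concrete, here’s a hedged Python sketch of the same idea: each move becomes one cookbook peaking band (the same formula as in the earlier sketch), applied in series. The centre frequencies, gains, and the Q of 1 are my approximations of the settings described above, not exact values.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """Audio EQ Cookbook peaking filter (same formula as the earlier sketch)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

fs = 44100
# Approximations of the mastering moves above: (centre Hz, gain dB), Q = 1.
bands = [
    (30, -1.5),    # ease off the subs between 20 and 40 Hz
    (90, +1.0),    # tiny boost to fatten the kick one more time
    (200, -1.0),   # pull out a few low-mids
    (500, +1.0),   # small boost
    (6000, +1.5),  # broad lift standing in for the 1-18 kHz boost
]

def master_eq(x):
    """Apply each band in series; the small moves compound into one curve."""
    for f0, gain_db in bands:
        b, a = peaking_eq(f0, gain_db, 1.0, fs)
        x = lfilter(b, a, x)
    return x

mix = np.random.randn(fs)   # stand-in for the bounced AIFF as a float array
mastered = master_eq(mix)
```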

It’s a personal preference of mine to increase the higher frequencies because I like my music to have a very bright sound to it, and that’s the effect that increasing high frequencies creates.

And for a track like this one, it’s perfect, because it’s a happy-sounding tune in G Major.

And frankly, that’s all for the EQ.

What Is EQ And What Are Some Of The Best Practices?

Now, we’re going to talk about EQ as a general concept.

While it may not seem like it, you’ve probably EQ’d a sound before without even knowing it, like turning down the bass on your car stereo.

This action, technically, is equalization, because you’re making the sound more palatable by turning down the bass frequencies. You’re literally “equalizing it.”

It’s probably not far-reaching to assume that most musicians don’t ask for specific adjustments to their music, for instance, “boost the frequency at 200Hz by +1dB.”

They may have a different way of saying it, like asking for the kick to sound more “aggressive.”

It’s up to us as music producers and engineers to understand what’s meant by the client’s words.

Over time, especially after working with more and more people, you’ll come to realize that a lot of people use different, albeit similar, terminology to describe the same frequencies.

For example, if someone wants to make the song sound more “treble-y,” that means they want a boost in the higher frequency range, say 1 kHz and up. Increasing this frequency range, as the image below shows, will bring more clarity and more “air” to the song.

Low and High frequencies are described in different ways.

The low-frequency range, from 10 to 200 Hz, will frequently be referred to as “Bass-y” or “Big.”

The higher frequency range, from 5 kHz to 20 kHz, will be referred to as “Treble,” “Meek,” or “Thin.”

Boosting and Cutting

Boosting a frequency, as the name suggests, means we’re increasing the volume of that frequency. More properly, we’re increasing the amplitude of that signal, which is closer to what’s actually happening.

Cutting a frequency, as the name suggests, means we’re subtracting that frequency range. It has the effect of lowering the volume, but really we’re just decreasing the strength of that frequency.
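Since both boosting and cutting are measured in decibels, it’s worth knowing what the numbers mean: dB is logarithmic, and a change of d dB multiplies amplitude by 10^(d/20). A quick sketch:

```python
def db_to_gain(db):
    """Convert a dB boost or cut into a linear amplitude multiplier."""
    return 10 ** (db / 20)

print(db_to_gain(+5))    # ~1.78x amplitude, like the kick boost earlier
print(db_to_gain(+1.5))  # ~1.19x: why small mastering moves stack up gently
print(db_to_gain(-3))    # ~0.71x amplitude, roughly half the power
```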

1) Use Subtractive EQ

The term most commonly used when talking about mixing is subtractive EQ.

Essentially, this means that, rather than boosting frequencies in the desired range, you cut frequencies in another part of the track, which in turn creates the impression of a boost in the desired range.

Taking the example of the Boutique 808, the vast majority of its sonic energy falls between 50 Hz and 1 kHz, and anything above 1 kHz will typically be subtracted.

This is also called a low-pass filter, because we’re letting the low frequencies pass while the high frequencies are stopped.

We’ll get into Low and High Pass filters in the next step.

In the image below, for instance, you can see that I’ve subtracted all of the frequencies above 1 kHz, which “creates room,” so to speak, for other instruments to shine through.
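As a concrete illustration of that 808-style move, here’s a minimal SciPy sketch of a low-pass filter, with the 1 kHz cutoff taken from the example above; the filter order, sample rate, and noise stand-in are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# Low-pass at 1 kHz: the lows pass through, the highs are rolled off,
# mirroring the 808 treatment described above.
sos = butter(4, 1000, btype="lowpass", fs=fs, output="sos")

audio = np.random.randn(fs)       # stand-in for the real 808 track
low_passed = sosfilt(sos, audio)
```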

Moreover, the way subtractive EQ takes form in the mixing process is similar to how volume balancing works.

So if I want to increase the volume of the kick, I may actually choose to turn down the bass a little bit, or even the snare and other accompanying instruments that typically fall in that same EQ range.

Now, obviously, you can also boost desired frequencies if you want, as I’ve done in the image above at 100 and 200Hz. From what I understand, subtractive EQ is how most mixing engineers will tell you to approach EQ, although some likely disagree.

From what I’ve read, Subtractive EQ is a way of carving out different sounds and frequencies so everything can shine together, cohesively.

When I first started using EQ, I found that my mixes sounded terrible and muddy, because I was boosting the frequency of every sound I wanted amplified. The end result was a muddied mix in which all of the instruments were competing for the same frequencies, the same “sonic space,” so to speak.

Using the example of Jason Newsted, the bassist from Metallica: he said in an interview once that the reason his bass guitar wasn’t heard on …And Justice for All is that much of his playing directly followed the root notes of the guitars, so there was too much competition for the same frequencies, which causes muddiness.

Instruments will end up competing for the same frequency, and subtractive EQ is a way of remedying this dilemma.

Using another example: a beginner would probably just boost the frequencies he wants amplified, as I mentioned I used to do above, with the most common being the bass, especially in hip-hop.

Rather than thinking about what you can “add,” think about what can be eliminated, and how this would bring attention to the more desired frequencies.

Having said all of that, this doesn’t mean you have to completely avoid adding frequencies; it’s just a matter of the order in which you do so. It’s best to employ subtractive EQ first and then additive EQ after.

2) Use Low- and High-Pass Filters

As I mentioned in passing above, setting up low- and high-pass filters is a great move for adding clarity and allowing your tracks to breathe.

A low-pass filter blocks higher frequencies and allows low frequencies to shine; a high-pass filter is the opposite, blocking low frequencies for the sake of emphasizing higher ones.

It’s important to note, however, that when employing low- and high-pass filters, you may eliminate some of the frequencies that make the music sound authentic and real. So it’s a good idea to listen closely when setting them and determine whether your low-pass or high-pass is too strong.

Explained another way: make sure you’re subtracting unneeded frequencies, and not frequencies that are actually making small but ultimately important contributions to the way the instrument sounds.

In the image below, I’ve set up a high-pass filter where the lowest frequencies have been eliminated, because the instrument in question is the “tinging” sound of a cymbal, which is fairly high-frequency.

To create as much sonic space as possible, it’s not a bad idea to subtract unneeded frequencies from every instrument track, but as I said above, be careful.
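And here’s the mirror image for something like that cymbal track: a high-pass that rolls off low rumble the cymbal doesn’t need. The 300 Hz cutoff is an assumption chosen just for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# High-pass at 300 Hz: low rumble is removed, the cymbal's highs pass through.
sos = butter(4, 300, btype="highpass", fs=fs, output="sos")

cymbal = np.random.randn(fs)      # stand-in for the real cymbal track
cleaned = sosfilt(sos, cymbal)
```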

3) Use Low-Cut and High-Cut Filters

A low-cut and a high-cut filter are the same as the high-pass and low-pass filters, respectively; they’re just named from the other direction.

For example, it’s not a bad idea to add a low-cut to the guitar, which has little useful low end, and a high-cut to instruments that don’t have a lot of highs, such as the kick and snare.

Filters are great because they allow us to eliminate unwanted noises and unneeded sounds from our music, permitting the most desired sounds to shine through.

You may think to yourself, “Hey, what’s the difference between a low-cut and a high-pass?” They’re pretty much the same thing; the difference lies in the principle and purpose behind employing them.

4) Pay attention to the point between 100 and 200 Hz.

This is, by far, the area that has to be watched the most, because many instruments and sounds will have frequencies at this level, including the guitar, the piano, the bass, the kick, the snare, and so on and so forth.


It seems like all of the major instruments have frequencies in the 100 to 200 Hz range, so it’s important to pay attention to it and make subtractions where they’re warranted.

Now, let’s talk about the different frequency ranges: sub, low, low-mid, mid, upper-mid, mid-to-high, and high frequencies.

Frequency Ranges

Sub Frequencies are below 80Hz.


This is perhaps the frequency range most commonly discussed by the general public, because of the term “subwoofer,” referring to a common speaker type that people often put in their cars to make the bass as loud as possible.

These frequencies can be particularly destructive if used too much.

Crank the frequencies in your mix at 80Hz and below, and then play the track in your car and you’ll quickly find out why. The bass will be so overpowering that it’ll sound terrible.

Low Frequencies are between 50 and 200Hz

Frequencies between 20 and 200Hz have a tendency to make music sound a lot thicker and bassier. The human ear is not the greatest at hearing these frequencies, which is a common reason why producers end up making beats and songs with bass-lines that are off-key.

A quick way of getting around this, by the way, is by shifting the music up by one octave to see what it sounds like, or by changing the software instrument track to a piano or bass guitar to hear the notes better.
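The octave trick works because an octave up exactly doubles every frequency (12 semitones, each a factor of 2^(1/12)), which moves a bass line into a range the ear judges far more reliably:

```python
def shift_semitones(freq_hz, semitones):
    """Equal-temperament pitch shift: each semitone is a factor of 2**(1/12)."""
    return freq_hz * 2 ** (semitones / 12)

print(shift_semitones(55.0, 12))  # A1 (55 Hz) -> A2 (110 Hz): one octave up
print(shift_semitones(55.0, 1))   # ~58.3 Hz: off-key notes hide down this low
```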

A track with a lot of low frequencies will always have a thick and bassy sound to it. It’s definitely easy to overdo it in this area, so pay close attention to what you’re doing.

Low-Mid Frequencies are between 200Hz and 300Hz

As I mentioned in passing above, this is the “muddiness” frequency, so it’s important to watch how many instruments and sounds are taking up this space.

For that reason, it’s not a bad idea to use subtractive EQ to eliminate any unwanted frequencies in this area, to create as much room as possible.

Freeing up space in this area will help clear the air for guitars, flutes, pianos, and vocals.

A technique people often use when dealing with this frequency is to subtract at 200Hz or so by around 2-3 dB, not a lot, but just enough to allow some space to breathe.

Mid Frequencies are between 300Hz and 700Hz


This frequency range is one to treat carefully with subtractive EQ, because carving out too much of it creates a “hollow” sound.

Apparently, this area is home to many of the most common and prominent instruments: male vocals, bass drum, snare drum, bass guitar, cello, saxophone, and some woodwinds. This is where the depth comes from; without it, the sound takes on a “shallow” quality.

This range forms the base of music, and a lot of instruments have frequencies in this area, including the piano, which is arguably the most important instrument in music production due to its use as the MIDI keyboard (one reason I recommend familiarizing yourself with it via PianoForAll, one of the best ways to learn).

There is no depth in music without these instruments and the frequencies they produce. Moreover, I find that if you use better-quality instruments, like Komplete 13 from Native Instruments, you don’t need to change much in the way of EQ.

Upper-Mid Frequencies are between 1.5 and 4 kHz (1,500 to 4,000 Hz)

This is a sensitive area, even more than most, because it’s the one most easily heard by the human ear. Increasing the frequencies in this area will make it seem like the music is more “in your face.” This frequency range has an aggressive quality to it.

Taking the example of guitar tone to illustrate my point, increasing frequencies from 1 to 4 kHz will have the effect of adding “crunchiness” or “bite” to the sound.

This is my favorite area to adjust when working on the guitar sound, because it’s where you can get that solid, cutting guitar tone. Too much energy in this area, according to Timothy Dittmar, can cause ear fatigue.

Mid to High Frequencies are between 4 and 10 kHz (4,000 to 10,000 Hz)

This is the frequency range most commonly associated with the words “Clarity” and “Presence”; the “Presence” knob on a guitar amp, for instance, is the adjustment meant to increase the overall “breathiness” of the sound.

Vocals usually live in this area as well, and we can add a bit of a boost here to allow the vocal track to sit nicely in the mix.

In other words, a frequency boost in this area will help the vocals cut through the rest of the track.

High Frequencies are anything above 10kHz (10,000 Hz)

This is the range most commonly boosted to increase the “brightness” of a track. An increase in this frequency range will also help instruments cut through the mix.

Other words used to describe it typically have to do with sunshine and other outdoorsy qualities, like “sparkly” or “sunny.” Lo-Fi, for instance, is largely devoid of these frequencies.

If you’ve ever heard Lo-Fi music, you’ll know that it has the quality of sounding like it doesn’t have much brightness.

It’s because there aren’t a lot of high frequencies in the music, and the vast majority of the sound is within the low-mid to mid-range.

For more on this subject, I recommend you check out Timothy Dittmar’s book, Audio Engineering 101. He does a good job of explaining things. You can check out his Twitter and his book on Amazon here.

YouTube Video

Watch the tutorial on YouTube where I run through all of the concepts I just talked about here.

Conclusion

That’s all for my article on EQ. Do me a solid and share this on your social media to all your producer friends. Also, take a look at my recommended gear page for more products that are great for music production.