creating visuals of interactive memory nodes in VisualJockey

posted by psymbolic visuals

creating visuals of interactive memory nodes in VisualJockey, the realtime animation software that's captivated our imagination since '01.

pushing pixels & re:listening to Bios & Logos by Mark Pesce

posted by troy

pushing pixels & re:listening to Bios & Logos, a very informative session by Mark Pesce, an early pioneer in Virtual Reality.

Bios & Logos in text below thanks to Future Hi

A constant challenge I have when writing about the future, especially the singularity, is that we are talking about something that, by its own definition, is incomprehensible within our human linguistic framework. I'm constantly searching for words, pictures, ideas, metaphors and models to better grasp what's coming.

According to Mark Pesce, creator of VRML and many other things, this appears to already have an historical precedent.

Things may look as though they’re going fast now, but this is nothing – literally, absolutely nothing – next to what’s about to happen, because (and now we have precedent for it) we’re about to see a technological acceleration on a similar order to the acceleration we saw when the logos separated from the bios. In this case, techne, our ability, is about to be freed from logos, our ability to describe it.

Below is the most interesting description (by Mark Pesce) that I have ever read about the singularity.


The Dawn of Life

To have a discussion of the origins of life on planet Earth, I need to discuss two fundamental texts, books that I would encourage you to read.

The first of these is Johnjoe McFadden's Quantum Evolution, in which he takes a look at a hitherto unresearched field – how quantum mechanics influences molecular biology and, in particular, the functioning of DNA.

DNA enters a superpositional state – that is, it enters as many as ten-to-the-500th universes – in order to find a configuration in which it can entangle itself with the physical world; in other words, in which it can sustain itself.

The improbability of life thus comes to rest on a firm foundation of physics, the first time there’s been any hint that our understanding of the world can help us understand one of the great mysteries of the world – how life came to be.

The second book is the recently published A New Kind of Science, by Stephen Wolfram. Wolfram may be the Isaac Newton of our generation (people are still debating this point, and will for the next hundred years).

Wolfram defines something called the Principle of Computational Equivalence. We think of physics as being composed of formulae, such as E=mc², F=ma, or PV=nRT. While Wolfram doesn't call the validity of these formulae into question, he does insist that they're not enough to describe the reality of the physical world. In addition to these formulae, there are processes: outcomes which cannot be predicted in a simple, mathematical fashion, but rather are more like computer programs which need to be executed before their results can be known.
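Wolfram's canonical example of such a process is an elementary cellular automaton: a row of cells updated by a trivial rule whose long-run pattern has no known shortcut, so the only way to learn the outcome is to execute it. A minimal sketch in Python, using Rule 30, one of Wolfram's own favorite examples:

```python
# Rule 30: a one-dimensional cellular automaton whose long-run pattern
# has no known closed-form shortcut -- you must run it to know it.
RULE = 30

def step(cells):
    """Apply one update of Rule 30 to a row of 0/1 cells (zero borders)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Encode the 3-cell neighborhood as a number 0..7, then look up
        # the corresponding bit of the rule number.
        neighborhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((RULE >> neighborhood) & 1)
    return out

def run(width=31, steps=15):
    """Start from a single seed cell and collect each generation."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

Each printed row follows mechanically from the one above it, yet the overall triangle of output is famously irregular; that gap between the simplicity of the rule and the unpredictability of the result is the point Wolfram is making.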

The difference between the world before Wolfram and the world after is the difference between Newton and Darwin.

Newton saw the entire world as a giant clocklike work of machinery and gears, together working seamlessly to create the physical universe.

Darwin envisioned the world as a collection of processes, working through time, to create the nearly infinite variety of forms which populate the natural world. Without process, there is no model for evolution; organisms do not evolve according to formulas, but rather because of their continuous interactions within the environment.

In his book, Wolfram tells us that this is the new model for physical reality, and we need to apply this model as broadly as possible. Nearly all physical processes of consequence in our world take place not in isolation, but as a consequence of repeated interactions in their environments.

What does this mean about the history of Earth? What we know is this: just about as soon as the Earth had cooled enough to allow the formation of some relatively complex chemical structures, life began. The Earth still had an average temperature of 160 degrees Fahrenheit (!) when life began.
Why could life begin? The quantum evolution hypothesis states that these molecules could search the quantum multiverse of 10-to-the-500th power worlds to find a world where they could sustain their interactions, where they could continue to exist.

We’re talking about creating quantum computers today, which can employ these same properties to crack encryption codes or solve other sorts of mathematical puzzles far beyond us now, but it turns out that nature has probably been exploiting this trick all along! And the latest scientific tests show that these simple molecules can enter that weird quantum world, so, as far as it’s been possible to prove the underlying assumptions of quantum biology, they’ve held up.

Once life popped out of the multiverse, it became subject to the new laws unearthed by Stephen Wolfram. Within the environment, organisms interacted in unpredictable ways, and every interaction of every organism on any other organism changed both organisms. Some organisms fought with each other, some combined with each other – for example, the mitochondria which provide the power for your body's cellular processes are the by-product of such a fusion – and from a simple set of rules, endlessly repeated throughout time, we can actually see the grand sweep of evolution emerge out of the physical processes which undergird nature.

The next four billion years of life could be characterized as a continuous set of interactions between different organisms in the natural environment, and every interaction in every environment leaves an impression – an information transfer – between these organisms. Some or even most of these interactions are nearly insignificant, but some of them concern the life or death of an individual member of a species, and, in those rare instances, that species either becomes extinct or a change is made in the species, recorded in the natural memory of DNA.

DNA is the information – and it’s nothing but information – which is the ultimate arbiter of the forms of the natural world; it’s a form of very slow memory. In each one of you, in nearly every one of your cells, is a memory of all the interactions your ancestors have ever had, from the very first cell, down to the present moment – that present moment being a rather long one – about 150 thousand years.

The Dawn of Man

It's believed that homo sapiens emerged in Southern Africa just about 150,000 years ago. Although there are now some challenges to the "out of Africa" argument about humanity's origins, it seems that the humans that we are all came from this same place, at around this time, slowly diffusing northward across Africa, reaching the Eurasian land bridge in the Middle East, and fanning out from there toward both Asia and Europe.

Now although we call these first ancestors homo sapiens – meaning they were genetically identical to ourselves – we don’t think of them as human in the same sense we think of ourselves as human. This is for one primary reason: we don’t see the hallmarks of human culture in these earliest human beings.

What do I mean by culture? Well, until last year, we had thought that humanity as we know it began about 35,000 years ago, because we found the representative elements of a human culture. However, last year we found equally convincing proof that this actually extends back at least 75,000 years. It could be that, eventually, we'll see that humanity-as-we-understand-it goes back as far as homo sapiens itself. Who can say?

By culture, in the sense of modern humans, we mean the existence of cultural artifacts.

Homo neanderthalensis, the Neanderthal who preceded the modern human, had a larger brain than ours, and was stronger and able to survive across a wider range of climates. However, the kinds of artifacts the Neanderthals left behind were extremely crude: very basic stone tools, which did not show any significant evolution over the lifespan of the species.

In other words, while the Neanderthals were completely situated within the natural environment, their adaptation to it happened just once, and then stopped.

So now we come to what makes us human: the history of homo sapiens begins, some 75,000 years ago, with some etchings on a piece of rock, nothing more than a series of wavy lines. This may not seem like much, but it’s the first example of decoration.

What is decoration? It’s something that serves no functional purpose – for example, a coat of paint doesn’t change the function of a house – but acts as a signifier of some reality that exists only in the mind of the beholder. In other words, a physical object has become a symbol, standing in for something other than itself.

The thing that separates us from the Neanderthal isn’t brain size, or brute strength, but a symbolic manipulation capability.

In order to have symbols, you need to have a consciousness capable of symbolic manipulation, that is to say a linguistic consciousness. While paleoanthropologists believe that the Neanderthal had some very basic linguistic capabilities, it is believed that these abilities were very limited – perhaps similar in nature to those of a year-old child, capable of identifying objects or actions, but little more.

What we see with homo sapiens is that this linguistic ability overflowed into the entirety of consciousness. The first benefit of this was the emergence of what we understand as language: nearly every human being has an innate capability to take a few symbols and manipulate them infinitely.

For example, although few of us ever use more than about 2000 English words, we can describe just about anything with those words, because we can instantaneously recombine them in any sensible order to create new forms of expression.
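The arithmetic behind that claim is simple combinatorics: with a vocabulary of 2000 words, the number of distinct ten-word strings alone is astronomical. A quick check in Python (the vocabulary figure is the talk's; the sentence length of ten is an arbitrary illustration):

```python
# Even a small vocabulary explodes combinatorially once words can be
# freely recombined: the number of distinct N-word strings is vocab**N.
vocab_size = 2000      # the working-vocabulary figure from the talk
sentence_length = 10   # a modest sentence, chosen for illustration

combinations = vocab_size ** sentence_length
print(f"{combinations:.2e}")   # ~1.02e+33 possible ten-word strings
```

Only a vanishing fraction of those strings are grammatical, of course, but that is exactly the speaker's point: the generative power lies in recombination, not in the size of the symbol set.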

That’s what those 75,000 year-old squiggly lines on a piece of stone imply: that our internal linguistic capability, which gave us language, had overflowed onto the material world, and that the material world had been consumed by our linguistic capability.

This is an important point, perhaps the central point I'm trying to make today: everything you look out upon exists less as a physical reality than as a construction of linguistic form.

But there’s another point we need to understand about the consequences of our linguistic capability, because it’s set us on a path toward the Singularity.
Raymond Kurzweil says that his machine singularity is absolutely inevitable because machines can perform computations about 10 million times faster than human neurons can. That may be so, but once again I think Kurzweil has missed the big story.

For 4 billion years, DNA was the recording mechanism of history, the memory of biology. As soon as we developed language, we no longer needed the slower form of DNA for memory; we could use the much faster form of language, which produced with it a deep sense of memory within the individual – since the linguistic symbols could be contained within the human mind.

Since we became a symbol-manipulating species, our forward evolution, in DNA terms, has come to a dead stop. (This has recently been proposed by reputable scientists.) However, our linguistic capabilities allow us to perform acts of memory much faster than DNA, probably at least 10 million times faster!

So, suddenly, homo sapiens is not just a biological entity working within the matrix of DNA and its slow historical recording, but now bursts through and starts processing its interactions within the environment 10 million times faster than ever before.

That’s a great thing. It’s made us the planetary force that we are today. But there’s a big price we paid for it, a price we’re not even vaguely aware of.

For all of evolutionary time, information had to travel the slow route through biology – through the bios – before it would be coded into our DNA. Now we had this additional process – which we call the logos, the Word – which was a completely new thing, and not something that the bios had any time to prepare for.

Because of that, homo sapiens can be identified by one specific characteristic: we are controlled not by the dictates of the bios, but by the dictates of the logos.

From its first recognizable moment, humanity demonstrates an entirely new relationship between bios and logos. Information, freed from its need to be embedded in the slow, dense vehicle of our DNA, speeds up 10-million-fold.

This renegotiation of power, between the previously unchallenged bios and the brand-new logos was not something that the bios was prepared for.

Most likely immediately, the bios was overwhelmed by the logos. The natural environment of the first humans was entirely and utterly replaced by a symbol-driven environment.

The post-modern philosophers claim that this is a new thing, that the Disneyification of the world has overloaded the natural world with the mediasphere. But this isn't a new thing, even if our recognition of it is; ever since shamans and storytellers began spinning myths that tell us who and what we are, the world has ceased to exist as nature, becoming a linguistic element in the story of homo sapiens.

However – and this is the second most important point I want to make today – the logos has its own teleology, its own entelechy, its own drive to some final dwell-state.

We assume that we are masters of language, of word and world.

I disagree.

The situation is exactly reversed. We are not in control of words, they control us.

Evolutionary biologist Richard Dawkins got it entirely right when he invented the concept of “memes,” which can be thought of as the linguistic equivalent of genes. Rather than being part of the bios, memes are the carriers of the logos.

OK, so we’ve covered the emergence of the bios, some 4 billion years ago, and the emergence of the logos, perhaps as much as 150 thousand years ago. Now let’s bring ourselves forward into the world we can recognize.

The Dawn of Modern Culture

I set the beginning of the common era at about 500 BC, because of one particular cultural artifact: Lysistrata by Aristophanes, a Greek comedy about how the women of Athens stop a war by denying their husbands sexual favors. If you've ever read the play, you know that the attitudes (and dirty jokes) of these women are entirely modern – it's as if all of the elements of the modern world are entirely present in the work.

We, as a species, have been driven by memes for the last hundred thousand years, and this has forced us further and further away from any direct connection with the natural world.

It’s not as though modern man has had any choice about his alienation from the natural world, and it’s a fallacy to presume that “primitive” cultures are any more closely connected to the natural world than we ourselves are. They too have completely overloaded the natural world with their linguistic natures – else how could the plants “talk” to them?

There may be many discrete forms of alienation from the natural, but they are, in essence, all the same. And they all point toward the same general trend:

We’re being hollowed-out by our memes. That is to say that our interiority, which is an artifact of the slow, quiet progression of the bios, is rapidly vanishing.

The modern conception of interiority is really a creation of the Enlightenment in Western Europe, and was only noted by philosophers as it was beginning to vanish utterly.

So here's the central point of what I wanted to come to Jamaica to say: the singularity is absolutely inevitable, and absolutely meaningless. The closest analogy we could make would be the whine of feedback you get when you place a microphone too close to an amplifier. The screech drowns everything else out, just as what we are – as individuals and as a culture – is being replaced by a rising form of activity dedicated to a single goal: making a clear path for the transmission of the logos. We're improving the fidelity of meme-swapping until it asymptotically approaches its theoretical limits.

And the truth is, we’re so far down that path that we have only a little bit more to go.

There are three cycles that I’ve been able to identify for you over the course of this talk:

  • The emergence of life, 4 billion years ago
  • The emergence of a linguistic species, 100,000 years ago
  • The emergence of a technological species, 5500 years ago.

    Let’s take these one at a time, and see how they’re convergent.

    First, the emergence of life, 4 billion years ago, was propagated through the medium of DNA, which acts as the informational carrier for life. This medium was very slow, but within the last twenty years, the medium of DNA has been translated into linguistic form.

    Think of the human genome, and the images you may have seen of it, not in the twisting double-helix of the molecule, but in the endless series of A, T, G, and C which make up the base-pairs.

    We have recently come to treat DNA as a code, a linguistic artifact, and, because of that, our ability to understand and manipulate DNA is now undergoing the same 10-million-times acceleration that happened when we became linguistic entities.
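Once the genome is written down as the letters A, T, G, and C, it really can be manipulated like any other text. A toy illustration in Python (the codon table below is a hand-picked handful of entries from the standard genetic code, not a bioinformatics tool):

```python
# Treating DNA as text: the four-letter alphabet can be manipulated
# like any other string.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

# A few entries of the standard genetic code, enough for the demo below.
CODON_TABLE = {"ATG": "Met", "TGG": "Trp", "AAA": "Lys", "TAA": "STOP"}

def reverse_complement(dna):
    """The opposite strand, read back in the 5'->3' direction."""
    return "".join(COMPLEMENT[base] for base in reversed(dna))

def translate(dna):
    """Read successive three-letter codons until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

gene = "ATGTGGAAATAA"
print(reverse_complement(gene))   # TTATTTCCACAT
print(translate(gene))            # ['Met', 'Trp', 'Lys']
```

The operations are pure string manipulation, which is exactly the shift the talk describes: once DNA is a code, the whole linguistic toolkit applies to it.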

    Second, the emergence of a linguistic species caused us to be taken out of nature entirely, and the world became a description of things, rather than things-as-they are.

    Although language sped the pace of novelty substantially, it was still bounded by proximity, and the speed of sound. When, around 1840, the telegraph was developed, the speed of information transfer increased well over a million-fold.
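The "million-fold" figure is roughly the ratio between the speed of sound, which bounds spoken language, and the near-light speed of an electrical signal. A back-of-envelope check in Python (round numbers, assumed for illustration):

```python
# Back-of-envelope check on the "million-fold" claim: spoken language
# travels at the speed of sound, a telegraph signal at nearly the
# speed of light.
speed_of_sound = 343        # m/s, in air at room temperature
speed_of_light = 3.0e8      # m/s; signal speed on a wire is comparable

ratio = speed_of_light / speed_of_sound
print(f"{ratio:,.0f}x")     # roughly 875,000x -- on the order of a million
```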

    Marshall McLuhan, the great Canadian media theorist, argued that this made the entire human species the equivalent of a single nervous system, but even the nervous system is very slow when compared to electric communication.

    The transmission of facts and ideas became instantaneous, and the speed of the development of novelty followed. When ideas move faster, there’s a greater capacity for them to interact, to produce concrescence.

    The history of the 20th century could accurately be described as a series of advancements in communication, beginning with radio and ending with the Internet, each technology successively colonizing the world, and each more rapidly than the technology before it.

    Third, the emergence of a technological species. Let’s take a good look at that.

    Technological artifacts are concretized language; that is, any technology is a bit of language that has been turned into a physical object.

    The first technology that was turned into a physical object was the linguistic technology itself. Writing is the first real technology of importance, because it freed language from its oral substrate, and made the carrier medium much more durable. We have an idea of history from 3500 BCE forward because of the invention of writing, which has created a continuity in humanity.

    All other technologies are, each in their own way, the descendants of writing. Writing was the exteriorization of our drive to communicate.

    We’ve seen the linguistic acceleration of DNA as codes, and the linguistic acceleration of communication as telecommunication, but we’re only now on the threshold of the acceleration of technology.

    Things may look as though they’re going fast now, but this is nothing – literally, absolutely nothing – next to what’s about to happen, because (and now we have precedent for it) we’re about to see a technological acceleration on a similar order to the acceleration we saw when the logos separated from the bios. In this case, techne, our ability, is about to be freed from logos, our ability to describe it.

    What do I mean when I say this?

    There’s an emerging science, known as nanotechnology, which will, before the next few years have passed by, give us a very fine-grained control over the material world.

    With nanotechnology we should be able to precisely design molecules to order, for whatever purpose we might desire.

    This is the coming linguistic revolution in technology, because, at this point, the entire fabric of the material world becomes linguistically pliable.

    Anything you see, anywhere, animate, or inanimate, will have within it the capacity to be entirely transformed by a rearrangement of its atoms into another form, a form which obeys the dictates of linguistic intent.

    It's very hard for us to conceptualize such a world, and I have continuously been forced to draw on the metaphors of the world of magic for any near analogies.

    It will be as if we have acquired the ability to cast spells upon the material world to achieve particular effects. Quoting Terence McKenna:

    “This downloading of language into objectified intentionality replaces the electrons that blindly run, and replaces it instead with a magical, literarily-controlled phase space of some sort, where wishes come true, curses work, fates unfold, and everything has the quality of drama, denying entropic mechanical existence.”

    This isn’t to say that we’re about to acquire the omnipotence we normally ascribe to God, but that our abilities will be so far beyond anything we’re familiar with today that we have no language to conceptualize them. No language at all.

    And that search for a language to describe the world we’re entering is, I think, the grand project of the present civilization. We know that something new is approaching.

    So we have three waves, biological, linguistic, and technological, which are rapidly moving to concrescence, and on their way, as they interact, produce such a tsunami of novelty as has never before been experienced in the history of this planet.

  • Build It. Share It. Profit. Can Open Source Hardware Work?

    posted by gift culture

    Open source hardware - how amazing, and I think it's the next wave of things to come. Things like this, the monome, and 3D fabrication really make me excited for all the things to come!

    Adobe CS4 Launch Event Review

    posted by mason dixon

    Well, the strangest thing of the whole event was the CS4 logo. The webcast was overly professional; it looked like a television show, with fancy animated introductions, realtime cut-aways, great lighting and funny head-mounted microphones.

    Overall, Adobe is driving customer expectation. They are making web users expect more from websites, and video on the web is extending website visits to 25 minutes in the case of the BBC – 8 times longer than with the BBC's previous website. They spent a good amount of talk-time on creating communal viewing experiences by moving web content from the PC to the living room entertainment center. They're right, of course: TV is being web-ified.

    The new CS4 Collections focus on three workflows: converting print designers into multimedia designers, cross-program integration in the video collection, and the "web" collection, which seems to be the real focus of the company.

    Ok, enough of my cynical banter, here’s the good stuff:

    After Effects:
    + Added Mocha AE, a much better motion tracker
    + Import textured 3D models from Photoshop (see below)
    + Better integration with other video collection apps

    Photoshop:
    + Wrap large images around 3D models. Pretty neat.
    + Content Aware Resizing, amazing!
    (great video about it:

    Flash:
    + After Effects style timeline
    + IK Bone Systems similar to the Puppet Tool
    + 2.5D animation

    Well, that’s about it. Despite their bad humor the features in Flash & Photoshop are amazing. Someday they will buy a 3D program, then we’ll see some real video enhancements.
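The Content Aware Resizing mentioned above is widely reported to be based on seam carving (Avidan & Shamir): find the connected top-to-bottom path of least-important pixels by dynamic programming and remove it, so the image narrows without squashing its subject. A minimal grayscale sketch in Python, with a deliberately toy energy function:

```python
# A minimal sketch of seam carving, the technique generally credited as
# the basis of Content Aware Resizing: remove the connected vertical
# path of least "energy" so the important pixels survive.

def energy(img):
    """Toy energy: horizontal gradient magnitude per pixel."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)])
             for x in range(w)] for y in range(h)]

def find_seam(img):
    """Dynamic programming: cheapest top-to-bottom connected path."""
    cost = energy(img)
    h, w = len(cost), len(cost[0])
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 1, w - 1)
            cost[y][x] += min(cost[y - 1][lo:hi + 1])
    # Trace the seam back up from the cheapest bottom pixel.
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 1, w - 1)
        seam.append(min(range(lo, hi + 1), key=lambda c: cost[y][c]))
    return list(reversed(seam))   # seam[y] = column removed in row y

def carve(img):
    """Return the image one column narrower, with the seam removed."""
    seam = find_seam(img)
    return [row[:x] + row[x + 1:] for row, x in zip(img, seam)]

# A flat image with one bright column: the seam avoids the bright edge.
img = [[0, 0, 9, 0, 0] for _ in range(4)]
print(carve(img))   # each row is now 4 pixels wide; the 9s survive
```

Repeating `carve` shrinks the image one column at a time; a production version would use a perceptual energy function and process rows the same way for vertical shrinking.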


    [] a tech injection

    posted by mason dixon

    Apple FCP T3 is coming soon…
    new version of Color & Motion

    If anyone wants to certify in it,
    here is the schedule of Train-the-trainer classes
    they cost $1400 but you get a free copy of FCP T3 ($1200)
    and you can teach FCP certified training classes.

    definitely check out this doc from the current version FCStudio 2
    about how the programs are working together.

    btw, most of the T3 books are being written by Chicago local Matt Geller,
    here is his blog:
    go matt!


    Anyone Teaching Flash Should Know What These Programs Do:

    Flex: an ActionScript & MXML programming environment that outputs Flash applications
    AIR: deploy HTML & AJAX & Flash as a desktop application

    Also btw, Flash 9 new features:

    > After Effects style timeline!!!!
    > Actionscriptable plugin-style visual filters
    like photoshop filters but scriptable in realtime
    > and 2.5D… um wow.

    If you are doing any Actionscript 3
    definitely review:

    > the Google Code project Tweener
    > the new RSS syndication library as3syndicationlib
    > the open-source alternative to the Flash Media Server named red5
    > the new 3d engine: Papervision 3D
    > and the new multi-user game engine named PaperWorld3D
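For readers who haven't used a tweening library: the core idea behind Tweener and its kin is to interpolate a display property between two values over a series of frames, shaped by an easing function. A minimal sketch of the concept in Python (this is the general idea, not Tweener's actual ActionScript API):

```python
# The core idea behind tweening libraries like Tweener: interpolate a
# property between two values over time, shaped by an easing function.

def ease_out_quad(t):
    """Decelerating curve: fast start, slow finish (t in 0..1)."""
    return 1 - (1 - t) ** 2

def tween(start, end, steps, easing=ease_out_quad):
    """Yield the property's value at each animation frame."""
    for frame in range(steps + 1):
        t = frame / steps                       # normalized time 0..1
        yield start + (end - start) * easing(t)

# Move an x position from 0 to 100 over five frames.
print([round(v, 1) for v in tween(0, 100, steps=5)])
# [0.0, 36.0, 64.0, 84.0, 96.0, 100.0]
```

Swapping the easing function changes the feel of the motion, which is essentially all an easing library lets you configure.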


    If you are at all inclined towards color correction:

    I’ve just come across 2 excellent books on the topic.

    Dan Margulis is brilliant, and while he deals with photography in Professional Photoshop: The Classic Guide to Color Correction, the applications to film are stunning.

    The second edition of Ron Brinkmann's classic The Art and Science of Digital Compositing has just been rereleased.


    Please enter work in the 1-Minute Film Fest

    public display in Harvard Square!


    The Motion Graphics Fest is coming to Orlando

    discounts for members coming soon…
    become a member


    …And from JakeVsRobots’ Twitter

    Forget about Ruby on Rails
    a new horse… er old horse is in town:

    Interview with Mason Dixon about the Motion Graphics Festival

    posted by mason dixon

    How did the Motion Graphics Festival begin? How did it develop its specific focus?

    The festival began five years ago, and at that time there was a real need in the film and video industry for a festival that celebrated creativity and graphical innovation without excluding work based on genre. For instance, creative motion pictures created for advertising, film, and the internet were highlighted in very different forums, and works not created specifically for those delivery methods, such as realtime (VJ) work or medical motion illustration, had almost no public awareness or exhibition opportunities.

    Is it hard to get a new festival off the ground?

    The people that come to our festival have been craving this kind of work. There were so many artists in need of educational opportunities and people interested in this artform that the festival had a very natural growth. There have been so many volunteers and enthusiasts that the biggest obstacle has been finding ways to organize all of the interested people and companies.

    Is most of the work shown in theaters at the festival intended for another environment (the web, a television, etc)? Does this matter?

    It matters a great deal. As Marshall McLuhan illustrated so well, the method of delivery is an integral part of every piece of content. This has been a curatorial challenge for us, especially considering our cross-genre emphasis. We have really appreciated organizations like Lumen Eclipse because they enable us to curate works in a way that really highlights their genius. For instance, Sean Capone's work was designed for public video displays in fashion stores. The work is so well done specifically because it works well in this format: it entices audiences but does not hold their attention for the length of the piece. The fact that it doesn't hold audiences' attention after a short time would be excruciating for a theater audience, but on the Lumen Eclipse kiosk it works perfectly. This kind of integration of delivery and content is exactly why genre-specific festivals failed to include so much amazing motion-picture design.

    In what other venues is work presented?

    Our curatorial program constantly evolves to include what people make with motion-picture tools. Over the last five years we have been able to include almost all of the work we felt deserved attention through the following programs: theater presentations, interactive/installation gallery showcases, realtime video concerts, public video/interactive kiosk displays, and online presentations.

    But if someone created an amazing motion-picture designed specifically to be shown on ping-pong balls, you can bet a ping-pong tournament would be added to our program.

    What is the aim of the Motion Graphics Festival, and what do you think is the most important function of festivals in general? Getting new work out there, into the market? Drawing together industry professionals/likeminded people?

    Our mission is three-fold: provide public awareness of motion-picture artistry, provide a critical framework for motion-picture artistry, and create educational advancement and future opportunities for motion-picture artists.

    Who attends the Motion Graphics Festival?

    Our demographics for the full conference program weigh heavily towards working professional artists but also include students. The screening and art exhibits attract a wide range of enthusiasts and curious humans.

    What is your background? What is it like to be part of a large art institution? Does it help or hinder your personal artmaking?

    Personally I attended film school at University of Texas and have worked professionally in the new media / emerging media space for over ten years. The Art Institute of Chicago is a fantastic institution that well deserves its reputation as the best art school in the country. Teaching there has enabled me to maintain a prolific art practice and run a national festival, while consistently being exposed to new ideas and approaches by my students and colleagues. Can you tell I like it?

    What are some of the highlights of the past five years of the festival?

    For me the best parts have been the amazing people I have met in the process. Specifically, I most treasure: Troy and Julee of (who now co-organize the festival), Jason White and the whole team at Lift Studios (who have created our Festival opener for several years in a row), and the artists and organizations (like Lumen Eclipse) that are pushing the boundaries of what is possible with Art in this century.

    In terms of activities, the curation of this broad assortment of work has always been a fun and interesting challenge. Also this year’s educational program really exploded the pedagogy of art education by incorporating aspects of professional training, academic education, and the underground arts education that is becoming very popular here in Chicago.

    Who are some artists that you’re excited about currently? Are the most exciting artists in motion graphics (or more broadly, moving images) working in the commercial world? Are they working for the silver screen, the TV screen, for other multimedia environments?

    Wow. I have no idea how to answer that. Honestly, I was very excited about working with Lumen Eclipse because I think you have built the most vibrant and cutting-edge collection of motion-picture art currently available.

    The next genre in which I expect to see a lot of innovation is narrative-based web content that incorporates new media design with motion-picture design.

    Where will we be in 10 years? 20 years? Is the internet going to revolutionize media beyond what it’s already done?

    The 10-20 year forecast for media and the Internet is bleak. Google will become an operating system, much as AOL tried to become years ago. Nearly all of the media that people receive will be personalized for the individual viewer, and will be formatted as a hybrid of content and advertising, much like 1940s television and the now-ubiquitous product placement in Hollywood cinema. Nearly all programming will be delivered through proprietary devices much like cell phones and satellite television receivers.

    The FCC’s rules on media consolidation are leading, and will continue to lead, to massive vertical integration of media companies, where competition exists only between mediums, not within them. The idea of the Internet as a democratic platform where anyone can participate was to a large degree enabled by Microsoft’s monopoly, and as their market dominance wanes, so will the Internet as we know it. What we think of as the “World Wide Web” will be relegated to little more than a back alley on the information superhighway, much as Newsgroups and Archie have been, accessible only to experts and bots. These platforms will be replaced by much more conditioned virtual spaces.

    This is why the iPhone has been such a symbolically important device. It places the competition in the media market into a new constellation, pitting the telephone and broadcast mediums against each other in a single coliseum of experience design: pure verticals of content, interface, advertising and delivery.

    The best plateau artists can hope to leap to in the next ten years is custom device design (likely through open-source microcontrollers) and/or private darknet delivery.

    Like I said: the forecast is bleak.


    Interview conducted by Lumen Eclipse

    more information on the Motion Graphics Festival

    Cool polyphonic audio editing technology!!!

    :: ::
    gift culture's picture

    Direct Note Access is a technology that makes the impossible possible: for the first time in audio recording history, you can identify and edit individual notes within polyphonic audio material. The unique access that Melodyne affords to pitch, timing, note lengths and other parameters of melodic notes will now also be afforded to individual notes within chords. Check out the video here:

    Squish the Squid Productions - Guide to Free Mac Music Software

    :: ::
    gift culture's picture

    Here's a link to an interesting guide for free Mac Music Software:

    EQ reference

    :: ::
    gift culture's picture

    Here's a nice set of EQ references:

    EQ Settings

    General:
    - 20 Hz and below: impossible to detect; remove it, as it only adds unnecessary energy to the total sound, most probably holding down the overall volume of the track
    - 60 Hz and below: sub bass (feel only)
    - 80(-100) Hz: feel AND hear bass
    - 100-120 Hz: the "club sound system punch" resides here
    - 200 Hz and below: bottom
    - 250 Hz: a notch filter here can add thump to a kick drum
    - 150-400 Hz: boxiness
    - 200 Hz - 1.5 kHz: punch, fatness, impact
    - 800 Hz - 4 kHz: edge, clarity, harshness; defines timbre
    - 4500 Hz: extremely tiring to the ears; add a slight notch here
    - 5-7 kHz: de-essing is done here
    - 4-9 kHz: brightness, presence, definition, sibilance, high-frequency distortion
    - 6-15 kHz: air and presence
    - 9-15 kHz: boosting gives sparkle and shimmer and brings out details; cutting smooths out harshness and darkens the mix

    Kicks:
    - 60 Hz with a Q of 1.4: adds fullness to kicks
    - 5 kHz with a Q of 2.8: adds attack to kicks
    - bottom (60-80 Hz), slap (4 kHz)
    - EQ: cut below 80 Hz to remove rumble; boost between 80-125 Hz for bass; boost between 3-5 kHz to get the slap
    - Processing: compression 4:1/6:1, slow attack, medium release; reverb: tight room reverb (0.1-0.2 ms)
    - General: apply a little cut at 300 Hz and some boost between 40 Hz and 80 Hz
    - Control the attack: apply boost or cut around 4 kHz to 6 kHz
    - Treat muddiness: apply cut somewhere in the 100 Hz to 500 Hz range
    - Kick summary: bottom depth at 60-80 Hz, slap attack at 2.5 kHz

    Snares:
    - 100 Hz with a Q of 1.0: adds fullness to snare
    - 200-250 Hz with a Q of 1.4: adds wood to snares
    - 3 kHz with a Q of 1.4: adds attack to snare
    - 7 kHz with a Q of 2.8: adds sharpness to snares and percussion
    - fatness at 120-240 Hz, boing at 400 Hz, crispness at 5 kHz, snap at 10 kHz
    - EQ: boost above 2 kHz for that crisp edge; cut at 1 kHz to get rid of the sharp peak; boost at 125 Hz for a full snare sound; cut at 80 Hz to remove rumble
    - Processing: compression 4:1, slow attack, medium release; reverb: tight room reverb (0.1-0.2 ms)
    - Snare summary: fatness at 240 Hz, crispness at 5 kHz

    Vocals:
    - General: roll off below 60 Hz using a high-pass filter; this range is unlikely to contain anything useful, so you may as well reduce the noise the track contributes to the mix
    - Treat harsh vocals: to soften vocals, apply cut in a narrow bandwidth somewhere in the 2.5 kHz to 4 kHz range
    - Get an open sound: apply a gentle boost above 6 kHz using a shelving filter
    - Get brightness, not harshness: apply a gentle boost using a wide-band bandpass filter above 6 kHz; use the sweep control to sweep the frequencies to get it right
    - Get smoothness: apply some cut in a narrow band in the 1 kHz to 2 kHz range
    - Bring out the bass: apply some boost in a reasonably narrow band somewhere in the 200 Hz to 600 Hz range
    - Radio vocal effect: apply some cut at the high frequencies, lots of boost around 1.5 kHz, and lots of cut below 700 Hz
    - Telephone effect: apply lots of compression pre-EQ, and a little analogue distortion by turning up the input gain; then apply some cut at the high frequencies, lots of boost around 1.5 kHz, and lots of cut below 700 Hz
    - Vocal summary: fullness at 120 Hz, boominess at 200-240 Hz, presence at 5 kHz, sibilance at 7.5-10 kHz

    Hi-hats & cymbals:
    - 10 kHz with a Q of 1.0: adds brightness to hats and cymbals
    - sizzle (7.5-10 kHz), clank (200 Hz)
    - EQ: boost above 5 kHz for sharp sparkle; cut at 1 kHz to remove jangling
    - Processing: compression with a high ratio for a high-energy feel; reverb looser than for kick and snare, allowing the hats, and especially the rides, to ring a little
    - Get definition: roll off everything below 600 Hz using a high-pass filter
    - Get sizzle: apply boost at 10 kHz using a bandpass filter; adjust the bandwidth to get the sound right
    - Treat clangy hats: apply some cut between 1 kHz and 4 kHz
    - Hi-hat/cymbal summary: clank or gong sound at 200 Hz, shimmer at 7.5-12 kHz

    Guitar:
    - Treat unclear vocals: apply some cut to the guitar between 1 kHz and 5 kHz to bring the vocals to the front of the mix
    - General: apply a little boost between 100 Hz and 250 Hz and again between 10 kHz and 12 kHz

    Acoustic guitar:
    - Add sparkle: try some gentle boost at 10 kHz using a bandpass filter with a medium bandwidth
    - General: try applying some mid-range cut to the rhythm section to make vocals and other instruments more clearly heard

    Other instruments:
    - Voice: presence (5 kHz), sibilance (7.5-10 kHz), boominess (200-240 Hz), fullness (120 Hz)
    - Electric guitar: fullness (240 Hz), bite (2.5 kHz), air/sizzle (8 kHz)
    - Bass guitar: bottom (60-80 Hz), attack (700-1000 Hz), string noise (2.5 kHz); compressed, EQ'd with a full bottom end and some mids
    - Toms: attack (5 kHz), fullness (120-240 Hz)
    - Acoustic guitar: harshness/bite (2 kHz), boominess (120-200 Hz), cut (7-10 kHz)
    - Rack toms: fullness at 240 Hz, attack at 5 kHz
    - Floor toms: fullness at 80-120 Hz, attack at 5 kHz
    - Horns: fullness at 120-240 Hz, shrill at 5-7.5 kHz
    - Strings: fullness at 240 Hz, scratchiness at 7.5-10 kHz
    - Conga/bongo: resonance at 200-240 Hz, slap at 5 kHz

    EQ Reference: Frequencies

    50 Hz
    - Boost: to thicken up bass drums and sub-bass parts
    - Cut: below this frequency on all vocal tracks; this should reduce the effect of any microphone "pops"

    70-100 Hz
    - Boost: for bass lines and bass drums
    - Cut: for vocals
    - General: be wary of boosting the bass of too many tracks; low-frequency sounds are particularly vulnerable to phase cancellation between sounds of similar frequency, which can result in a net cut of the bass frequencies

    200-400 Hz
    - Boost: to add warmth to vocals or to thicken a guitar sound
    - Cut: to bring more clarity to vocals, or to thin cymbals and higher-frequency percussion
    - Boost or cut: to control the "woody" sound of a snare

    400-800 Hz
    - Boost: to add warmth to toms
    - Boost or cut: to control bass clarity, or to thicken or thin guitar sounds
    - General: it can be worthwhile applying cut to some of the instruments in the mix to bring more clarity to the bass within the overall mix

    800 Hz - 1 kHz
    - Boost: to thicken vocal tracks; at 1 kHz, apply boost to add a knock to a bass drum

    1-3 kHz
    - Boost: to make a piano more aggressive; applying boost between 1 kHz and 5 kHz will also make guitars and basslines more cutting
    - Cut: apply cut between 2 kHz and 3 kHz to smooth a harsh-sounding vocal part
    - General: this frequency range is often used to make instruments stand out in a mix

    3-6 kHz
    - Boost: for a more "plucked"-sounding bass part; apply boost at around 6 kHz to add some definition to vocal parts and distorted guitars
    - Cut: apply cut at about 3 kHz to remove the hard edge of piercing vocals; apply cut between 5 kHz and 6 kHz to dull down some parts in a mix

    6-10 kHz
    - Boost: to sweeten vocals (the higher the frequency you boost, the more "airy/breathy" the result), to add definition to acoustic guitars, to add edge to synth sounds or strings, or to enhance a variety of percussion sounds: bring out cymbals, add ring to a snare, add edge to a bass drum

    10-16 kHz
    - Boost: to make vocals more "airy", or for crisp cymbals and percussion; also boost this range to add sparkle to pads, but only if the frequency is present in the original sound, otherwise you will just be adding hiss to the recording

    Search and Play

    :: ::
    derby's picture

    Music search and play just became simple and slick.