A few years ago, during a discussion at the EMP conference on the future of rock criticism and the chances of making a living doing so, someone suggested that music writers turn to academia, teaching classes in pop history, contemporary culture, etc. Not a bad idea, but unfortunately it looks like the musicians themselves are beating us to it. First Bun B of UGK starts teaching at Rice, and now David Lowery of Cracker and Camper Van Beethoven has taken a job teaching in the music business certificate program at the University of Georgia in Athens. Anybody remember when people went into pop music to avoid college?
Virginia Heffernan, who has never met a modern technology scare-mongering story she hasn’t embraced, writes in the New York Times about the latest study linking hearing loss and headphone use. Except, according to the abstract in The Journal of the American Medical Association, that’s not what the study says. It does say that hearing loss among teenagers appears to have increased, but it makes no conclusions about the cause, and, according to another review of the study, says that “Hearing loss was not associated with ear infections, use of firearms, or self-reported noise exposure” (my italics). That’s no problem for Heffernan: she simply cites “many” unnamed “researchers” who support the position she’d apparently taken before she read any of the research (one important point of the study that she doesn’t mention is that hearing loss is higher among teenagers who live below the poverty line, which would suggest other environmental causes besides headphone and earbud use). With that unverified support, she considers herself free to editorialize to her heart’s content: “…it’s amazing that the intensely engineered frankensounds that hit our eardrums when we listen to iPhones are still called music.”
Though it’s important to point out to Heffernan that those sounds are called music because they are music (you know, organized noise and all that), in some ways I understand and almost agree with her point of view. I use headphones a lot, but I find them frustrating because they limit and often destroy the spatial impact music possesses when played in the open air, an effect that no set of headphones I’ve ever heard has been able to recreate. They’re great for concentration, and for shutting out the outside world, but something is definitely lost. I still remember how revelatory it was to hear Lil Wayne’s “Lollipop” on the stereo after listening to it on headphones for a couple of months. The way stereo space was used on the record added to its weirdness in ways that the headphone experience couldn’t replicate; that was the first time I realized how great the record really was. On headphones, every record you hear may as well be in mono (a fact that some artists, such as The Black Eyed Peas, take advantage of: The E.N.D. may as well be in mono—there’s almost no stereo separation at all).
I’m not really an audiophile, but the loss of not just sound but spatial quality marks a major difference between the way recorded music sounds now and how it did in the past. I’ve even seen surveys that suggest that people who have grown up in the iPod era prefer the sharp, tinny sound of earbuds over the fuller sound you get from stereo speakers. How representative that is, I have no idea, but it could be part of the reason for the success of artists like Ke$ha, whose music is based on that sound, and the growing popularity of dubstep, which takes as much advantage of virtual space as the real thing. What I haven’t seen is any sort of survey on how much time people actually spend listening through earbuds or headphones as opposed to actual speakers (even if only on their computers). If anyone has seen such numbers, point them out to me. Contrary to what Heffernan seems to believe, I don’t think headphones are going to make us all deaf, but they may very well change the way music sounds, and how we respond to it. That’s the real story.
Stories like this one in Billboard, about a study suggesting that celebrities have almost as large an “influence” as regular news sources on Twitter, suffer from one serious flaw: they never once lay out what is actually meant by “influence”. Doing a little research reveals that this is a problem with the study as a whole: “So what defines influence? ‘That’s a difficult question to answer,’ [study co-author Alok] Choudhary says. His algorithms calculate a tweeter’s influence based on the actions (retweets, direct messages and “@” responses) his tweets inspire in his network, defined as his followers and his followers’ followers.”
In other words, “influence” is roughly defined as how much of a chain reaction, in the form of retweets and other responses, you get as the result of your tweets. “Influence” is nothing more than how much energy your tweet generates, with no attempt to determine what that energy actually results in (aside from other tweet reactions) or even where it directs itself. So when you say that Adam Lambert and Conan O’Brien have an “impact” on “politics and world events”, you’re making nebulous judgments about things that even those who study them for a living find it impossible to define or draw conclusions about. I haven’t read the study, so I have no idea whether Choudhary is making claims for this sort of knowledge, but I’d bet not. This is obviously the press jumping to conclusions, hoping to suggest that celebrities have more cultural power than they do—because how else can the celebrity and trade press justify its existence?
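For what it’s worth, the metric as described is pure reaction-counting inside a two-hop network. Here’s a minimal sketch of what that kind of score might look like—the function names, data shapes, and equal weighting of reactions are my own illustration, not Choudhary’s actual algorithm:

```python
def two_hop_followers(user, followers_of):
    """A user's 'network' as the article describes it:
    their followers plus their followers' followers."""
    first = set(followers_of.get(user, []))
    second = {f2 for f1 in first for f2 in followers_of.get(f1, [])}
    return (first | second) - {user}

def influence(user, followers_of, reactions):
    """Count reactions (retweets, @-replies, DMs) aimed at `user`
    that come from inside the user's two-hop network. Each reaction
    is an (actor, target) pair; all reaction types count equally here."""
    net = two_hop_followers(user, followers_of)
    return sum(1 for actor, target in reactions
               if target == user and actor in net)
```

Note what the sketch makes obvious: the score only measures how much chatter a tweet provokes, and says nothing about what (if anything) that chatter changes out in the world.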
At any rate, it may still be worth checking out Choudhary’s site, if only for a glimpse at how strong an “influencer” (can we please ban this word now?) Indonesia seems to be on Twitter trends.
“…great art offers a necessary alternative to an over-mediated culture. Art writers should use the internet to counteract the dematerialization of a hyper-connected world, not encourage it through false promises.”
—James Panero in The New Criterion
Worth reading in full.
MTV “News” provides a brief history of Wham!’s “Last Christmas”, tracking nearly every cover version you’re likely to have heard over the last 25 years. Why? Because they just sang it on Glee, of course. Except the Glee Cast sang it last year, too, which MTV doesn’t seem to remember. Not that I blame them; I’m trying to forget it, myself. But from a pseudo-news organization you expect something more…uh…oh, never mind.
Via Ann Powers at the LA Times, Cee-Lo Green offers advice to parents who have their doubts about letting their kids hear “Fuck You”. He recommends letting them deal with it. “I wouldn’t necessarily want my children to be naïve about anything. I can either teach them how to negate or navigate. To get through it, or avoid it completely. That’s all that we can hope for them, to be able to distinguish things.” The LA Times, however, isn’t so sure: “If you are under 13 years of age you may read this message board, but you may not participate,” it says at the top of the comments section. So there you go, kids, you can listen to “Fuck You”, but no singing along.
They aren’t now, obviously, and I don’t know if anyone is even considering it, but after reading the most recent study about streaming services and mobile networks, I think there’s a (pardon the word) synergy that can’t be denied. There are already a number of established and fairly successful streaming services, each with its own demographic, and their impact is growing by the day. Once iTunes, Google, and eMusic join the streaming throng, the sheer weight of the movement will carry itself along.
Most streaming services already have some sort of licensing agreement with the major labels, so in terms of what’s available there isn’t much difference between them. The only real distinctions are ease of use, opportunities for the audience to discover new music (even though that’s not always what the audience wants), social network capabilities, and price (though there’s not much difference there, either). The only real opportunity for distinction in the future (besides, in the case of iTunes and Google, sheer size and brand recognition), will be exclusivity. Since profit margins for streaming are so small, though, it’s doubtful that the major labels would be willing to give an exclusive deal to a service, unless it came with a guaranteed payment so large the services would find it exorbitant. At the moment, it makes more financial sense for the major labels to spread their product through as many different services as possible.
Imagine, though, if an already successful, independent band, such as Radiohead (which would have little to lose in such an experiment), were to sign an exclusive deal with a streaming service, even if it’s for only one album. Physical product would still be available, of course, but perhaps not until after a delay of a month or two, much the way In Rainbows was released online or many albums are now released digitally before appearing in stores. Something like this has already been going on at Rhapsody, where they will occasionally have exclusive rights to stream an album the week prior to its release. Although there will always be other ways of getting a hold of music once it’s been made available in any form, it’s hard to imagine that such a scenario wouldn’t result in a boost in the subscriber base of whatever company was lucky enough to make the deal.
From there it might be only a matter of time before services started signing bands directly, both established groups who have untangled themselves from the majors and new bands willing to work for a pittance in order to get the exposure that a service with a few million subscribers might give them. It would mean lower profits for everybody at first, but it would also mean lower expenditures, and might ultimately turn into a steady revenue stream for everyone involved (it might even make theoretical nonsense like the long tail seem feasible).
Because the established labels would undoubtedly be resistant to such a plan (possibly even to the point of killing their agreements with certain services), and also because the services themselves aren’t set up, for now, with all the things necessary to be a record label, the majority of the acts on streaming services, at least at first, would be those independent bands who are unencumbered by contracts but also self-sufficient enough to put their own material together and do their own publicity. No doubt in the early days, the services would be willing to give these bands far more freedom than they could get from the majors, or even from some independent labels. In other words, while having almost complete artistic freedom, they would also be plugged into a distribution network giving them instant access to a potential audience of millions, along with essentially free promotion.
It’s possible—and excuse me if this sounds like either wishful thinking or sheer fantasy—that the result would be, for a brief time, a kind of golden age, where bands would find themselves given both a freedom and an access to audiences they’ve never enjoyed before, and that even the audience would tune in in ways they haven’t for years. It will all fall apart in the end, of course. Once real money is being made, corporate conglomeration will set in, the audience will fragment (since it often seems that the more monolithic the source of access is, the more fragmented the audience becomes), and the whole thing will come apart again, only to be re-formed as something else altogether. I’m not making predictions, just suggesting one possible scenario. Stranger things have happened.
Two tweets in a row from Pitchfork make almost the exact same comment about two different albums.
Andrew Gaerig on A Sunny Day in Glasgow’s latest, Autumn, Again: “more concise and less wily than its predecessor”
Tom Breihan on Bay Area garage-poppers the Fresh & Onlys’ latest, Play It Strange: “more focused, easier to digest”
It’s a movement!
Excuse me while I go off topic for a moment.
Roger Ebert has been posting a series of sarcastic tweets in which he refers to various volumes in his book collection as “e-books”. “Aww. My dog Ming chewed the spine of my e-book edition of ‘The Children of Sanchez.’” “Studs Terkel left me his autographed Royko e-book, and you can see here where he must have dropped his cigar.” “In his e-book edition of ‘The Grapes of Wrath,’ I found a check my father never cashed.” And so on. The point is obvious, and I understand what he’s getting at, but I also think the argument is meaningless.
Though this isn’t true of every argument I’ve heard against e-readers, the majority still revolve around the same basic idea of the experience of reading as something physical as opposed to intellectual. E-book critics go on about the cold feel of plastic as opposed to the warmth of paper, the smell of books, their heft, their volume, their typeface and design, and they usually end by conjuring up some fuzzy, sentimental scene that involves sitting in front of a fire in a cozy armchair, a cat on their laps and a dog at their feet, reading some classic work and basking in the glow of LITERATURE printed on paper and bound in leather. It’s the intellectual equivalent of a Thomas Kinkade painting.
Not that any of those are bad things. I grew up on books like anybody else. I love the way books look, the way they feel, the way they smell. I, too, love curling up in a comfy chair in front of a fire with a good book, though our cats are too big to sit in my lap for long, and we don’t have a dog. I love all those things. But there’s something I love more: words. Words and ideas and thoughts and stories and essays and novels and plays and poems, and all the other things that can be made out of words. A comfy chair and a fire are nice, but I don’t need them, and sometimes they’re even a distraction.
I fully understand the sentimental value of books, and I have many that I would never consider selling though I know I’ll never read them again. And though I appreciate Ebert’s point that his library contains mementos and memories that wouldn’t exist if he had grown up in a world of e-readers, does he honestly believe they wouldn’t be replaced by other sentimental markers? Does he own a print of every movie he’s ever seen, so he can go through his collection of film cans or videotapes and remember when he saw that movie with Gene Siskel, or remember what movie he was at when he got his first kiss? That’s what memories are for. Does he really need to find an uncashed check in a copy of “The Grapes of Wrath” to remember his father?
I don’t mean to step on Ebert’s memories, which are sweet and often funny, but why should they be used to launch an attack on e-readers when they have nothing to do with the purpose for which books were invented, the same purpose for which e-readers were invented, the transmission of information? That phrase sounds cold, but we all know that once we actually begin reading, it isn’t. If the words are good enough, if the information being transmitted is interesting enough, you won’t notice the source, even while you hold it in your hand. Isn’t that the point? Isn’t that what we’re all reading for, to be taken away from the mundane world of paper and ink, of metal and plastic, to be transported out of our armchairs and classrooms and bus and train and airline seats into another world? Why should we care how the words reach us as long as they reach into us?
As if the promise of an all-Madonna episode weren’t bad enough: “‘Glee’ Cast Will Take On Lady Gaga’s ‘Bad Romance,’ ‘Poker Face’”