Archive for the ‘diy’ Category

a standing workstation

I’ve never been fond of extended sitting around – I’m just not fond, and add a desk to the mix and I’m all just NOPE. But my digital audio workstation is at a desk. So I decided that was dumb. I wanted a standing workstation instead – but those cost hundreds to thousands of dollars – so I tried a standing configuration with my monitors at maximum height, using a music stand as a keyboard holder.

Since that worked, I decided to make a better keyboard holder, one that would also hold my trackball.


Adjustable!

It attaches to any stand that will take a standard mic clip. This was of course intentional. It’s 3/4″ thickwall PVC pipe, filed out on the inside to make the inner diameter wide enough to slip over the microphone pole of a standard mic stand. It doesn’t screw on, it just fits on, so don’t file it too much or it’ll get wobbly. The fit should be snug.

The top board is just some leftover plywood I had lying about, tinted with some leftover stain and polyurethane. Completely unnecessary, but looks nice. The board is held to the PVC frame with plumbing securements and brass bolts. Don’t use wood screws; quarter-inch ply doesn’t give you enough of an anchor for that.

Also, there’s a layer of double-sided tape between the metal securement hoops and the PVC end caps. If the fit wasn’t tight, that wouldn’t work – but it is, so it works well.

The end caps are important. You need them so that the T in the middle of the PVC support and the ends of the PVC support present the same frame diameter to the attachment system. If you didn’t do that, either the board or the PVC pipes would bend a little once you bolted everything down. This way it’s consistent and flat.

I think it came out as an attractive bit of kit. The screws aren’t flush, but the keyboard has feet and those are thicker than the screw heads, so it works out. I kind of expected the screw heads to sink in a little, but they didn’t; you can always drill a little bit into the wood with a bit the size of the screw head to flatten it a little bit further, if you need to. But that’s tricky with 1/4″ ply, since it’s so thin.

Since the mic stand’s telescoping pole reaches the top of the T inside the PVC frame, you can raise and lower the tabletop just like you would a microphone, so it’s adjustable to the height you like – at least, within limits.

The PVC pipe is also the right exterior diameter for a mic clip! If your mic stand is stable enough, you can totally do this, too, which lets you raise or lower the table like a boom mic. My stands aren’t awesome enough to be stable doing that for a heavy thing like a keyboard and trackball, but I could use this for other, lighter items if I wanted. The hard part is getting the clip not to rotate left and right – the clamping bolts on my mic stands don’t clamp firmly enough. If yours do, then great!

Very quick build, about an hour except for the staining and polyurethane, but that’s optional. This is quarter-inch ply, and that seems plenty strong enough for this purpose. It’s 65cm wide and 26cm deep, which was about as small as I could get and fit the keyboard and trackball.

What I’d really like is something I could move around just a little, kind of like a mobile rack for the monitors and keyboard, but that appears to be crazymoney. This seems like a reasonable middle ground that cost me, um… two disposable brushes plus stuff I already had on hand. ^_^ So far, I’m getting more work done since this indulges my dislike of chair and desk. We’ll see if that holds out over time.

you have been warned

I found these on Tumblr and edited them up to print resolution. 😀


Shatterdome Maintenance Level

The studio is actually on one of the upper levels of the Lair. The last thing you want to do is lose satellite signal when you’re aiming energy bolts at your enemies. It’s just sloppy. Besides, if we’re going to be launching jaegers, who wants to wait in elevators?


Particularly when we drop the bass

Extreme crush hazard. Extreme.

possibly some intel wtfery

UPDATED: See below.

Okay, so the latest: we’re pretty sure this is not actually xorg now. We’re back to session saves. Not I/O in general: specifically session saves, which is to say, saving the entire project.

See, the every-two-minutes thing turned out to be a new feature in Ardour I hadn’t noticed: scheduled auto-saves, which default to… every two minutes. Saves also happen whenever you enable master record, which is the other time I see it. So we’re pretty damn sure it’s Save Session.

We know it’s not I/O in general. Recording is actually far more I/O intensive, and once record is enabled and the save process is done, you can record all you want to without any problems. Bouncing existing material is also a complete nonissue.

It’s also not a filesystem issue: it happens even with RAMdisk, which is faster than anything else. And the behaviour reproduces itself perfectly on my non-USB on-motherboard Intel HD Audio card, so it’s not USB.

Now, to get into more details, I’ve gone digging deep into Ardour source code. BUT I HAVE AN IDEA, so bear with me.

In the source code, most of save happens in libs/ardour/session_state.cc

Save works fine when plugins are deactivated, but triggers XRUNs – buffer under/overruns that happen when processing demands more than 100% of the available digital signal processing (DSP) capacity – when plugins are active.

That’s any kind of plugin, and it doesn’t seem to matter how few.

Save Session calls a lot of things, including get_state(), which in turn gets latency data from plugins via (eventually) latency_compute_run() – the code for which is identical (!) in both the LV2 and LADSPA plugin interfaces.

latency_compute_run() calculates the latency by actually running the plugin. Not a copy: it runs in place the actual plugin that’s in use.

This is all in here:
libs/ardour/lv2_plugin.cc
libs/ardour/ladspa_plugin.cc

latency_compute_run() activates the plugin even if it’s already activated (!) then deactivates it on exit (which I guess is stacked somehow because they don’t deactivate in Ardour itself) and runs a second thread on the same instance of the plugin. (Presumably, because how else I guess?)

This strikes me as a minefield.

And so, a hypothesis: this is forcing the hyperthreaded, speculatively-executing Intel CPU I have to back out work, because of branch mispredictions and/or hyperthreading contention.

Penalty for this in Intel land is large, and I have seen commentary to the effect that it is large in the Intel Core series I have. I suspect that the two versions of the active plugin may be continually invalidating each other(!) for the duration of the latency test. It may even be causing the on-chip cache to be thrown out.

This would explain why it stops being an issue when the plugin is not active.

Thoughts?

ETA: Brent over on Facebook pointed me at this 5-year-old bug, which led me to try fencing Ardour off to a single CPU. And when I do that… the problem goes away. Now, this sounds terrible, but I’m finding even with my semi-pathological test project (which I built to repro this problem) I can get down to 23-ish ms latency with a good degree of safety. So clearly, no matter what’s happening, it does. not. like. multicore.
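If you want to try the same fence yourself, the tool is taskset, from util-linux; a minimal sketch, using `sleep` as a stand-in for the Ardour binary so it runs anywhere:

```shell
# Pin a process to CPU 0 only. In my case the command was Ardour;
# `sleep` stands in here so the sketch is runnable anywhere.
taskset -c 0 sleep 0.1 &
pid=$!
taskset -cp "$pid"   # prints the affinity list, which should now be just CPU 0
wait "$pid"
```

You can also repin an already-running process with `taskset -cp 0 <pid>`.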

That said, with hardware monitoring (which I have) that’s plenty good enough. I could live with 60ms if I knew it was safe. 23ms being safe (and 11.7 being mostly ok but a little iffy)? Awesome. Still: what is this?

ETA2: las, who wrote most of and manages the plugin code, popped on and said what I described would totally happen … except the latency recalculation doesn’t actually get called during save. I appear to have just misread the code, which is easy to do when all you have is grep and vi and an unfamiliar codebase.

ETA3: Well, hey! Turns out that setting Input Device and Output Device separately to the same device directly instead of setting Interface to the device (and leaving input and output devices to default assignment) means that Jack loads the device handler twice, as two instances – once for input, once for output. Thanks to rgareus on Ardour Chat for that pointer.

I can see how they get there, but there really ought to be a warning dialogue if you do that.
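As I understand the pointer, the equivalent shape on the jackd command line is one duplex ALSA device rather than split capture/playback handles; a sketch, with `hw:Track16` standing in for whatever your device is actually called:

```shell
# One duplex device: the ALSA backend is instantiated once.
jackd -R -d alsa -d hw:Track16 -r 44100 -p 256 -n 2

# The shape to avoid: separate capture (-C) and playback (-P) handles
# aimed at the same hardware, which loads the device handler twice:
#   jackd -R -d alsa -C hw:Track16 -P hw:Track16 -r 44100 -p 256 -n 2
```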

That means on a single-processor I can get down to 5.6ms latency and past my pathological repro tests cleanly. This is the kind of performance I’ve been expecting out of this box – at a minimum. Attained. I could in theory not even hardware monitor at these speeds – tho’ you really want to be down around 3ms for that ideally. (I can actually kinda run at 2.8ms – but it’s dodgy.) Since I have hardware monitoring I’m setting it all the way up to 11.6ms just to keep DSP numbers down. But any way you look at it – this is awesome.
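Those figures line up suspiciously well with power-of-two JACK buffer sizes at 44.1kHz – my assumption, but the arithmetic is just frames divided by sample rate:

```shell
# Latency in ms = frames / sample-rate * 1000, here at 44.1kHz.
awk 'BEGIN {
  for (f = 128; f <= 1024; f *= 2)
    printf "%4d frames = %.1f ms at 44.1kHz\n", f, f / 44100 * 1000
}'
# 128 -> 2.9ms, 256 -> 5.8ms, 512 -> 11.6ms, 1024 -> 23.2ms
```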

I was really hoping to get this system back to usability before heading off, and – success! Thanks to everybody who threw out ideas, even if they didn’t work, because at least there are things we get to rule out when that happens.

Also, I’ve started putting together a dev environment (with help from Tom – thanks!) so I can explore this further when I get back into town. Saves shouldn’t be doing this. It’d be one thing if it were just to HD and not to ramdisk – that’d be fine. But to ramdisk? No. Just… no. And the processor-core thing and the plugins-active-vs-not thing are just odd. Maybe I can find it.

linux filesystem performance help

NEW READERS: IT’S NOT ABOUT THE FILESYSTEM ANYMORE BUT IT’S STILL BROKEN: SEE UPDATES AT BOTTOM OF POST. Addressing filesystem performance only partly fixed it. Thanks!

Since always, I’ve had latency issues on my digital audio workstation, which is running Ubuntu Linux (currently 12.04 LTS) against a Gigabyte motherboard with 4G of RAM and a suitably symmetric four-core processor. CPUs run 20%-ish in use most of the time (and all the time for these purposes), and I never have to swap.

In this configuration, I should be able to get down to around 7ms of buffer time and not get XRUNs (audio dropouts due to buffer under/overrun) in my audio chain. 14ms if I want to be safe.

In reality, I can’t make it reliably at 74ms, and that has hitches I just have to live with. To get no XRUNs or close to it I have to go up to like 260ms, which is insane. I even tried getting a dedicated root-device USB card – I’ve long assumed it was some sort of USB issue. But no.

With some new tools (latencytop in particular) I have found it. It’s the file system. Specifically, it’s in ext3’s internal transaction logging. To wit:

EXT3: committing transaction     302.9ms
log_wait_commit                  120.3ms

If I turn off access-time (“atime”) updating on reads, which I tried last night, I get rid of 90% of the XRUNs, because the file system does about 90% less transaction logging – it’s no longer updating all those inodes with new timestamps.
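For reference, that’s a mount option; a sketch, with the UUID and mountpoint as stand-ins for your own:

```shell
# /etc/fstab sketch: mount the audio filesystem with atime updates off.
#   UUID=xxxx-xxxx  /home  ext3  defaults,noatime,nodiratime  0  2

# Or apply it immediately, without a reboot:
sudo mount -o remount,noatime,nodiratime /home
```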

But any attempt to write – well, you can guess. Even the pure realtime kernel doesn’t help; I compiled and installed a custom build of one today, but apparently this is still atomic: I get exactly the same behaviour. I may be able to live with that to some degree, because it’s a start-and-stop-of-writes thing, and as long as it doesn’t trigger during writes, I can get by.

But it’s bullshit, and it pisses me off.

I’m currently in the process of upgrading ext3 to ext4. I’d like to think that will solve it, given ext4’s dramatically better performance, but I have no such assurances at this point. I genuinely thought the realtime kernel might do it.

DO YOU HAVE ANYTHING YOU CAN TELL ME, DEAR INTERNETS? Particularly about filesystem tuning. Because this shouldn’t be happening; it just shouldn’t. Honestly, three tenths of a second to commit a transaction? I’ve been places where that kind of number was reasonable; it was called 1983, and I don’t live there anymore.

Anybody?

THINGS IT IS NOT:

  • Shared interrupt
  • This particular hard drive (the previous drive did it too; this one is faster)
  • ondemand CPU frequency scaling (I’m running the performance governor)
  • this particular USB port, or a USB hub, or an extension cord, or anything of the sort
  • bluetooth or other random services (including search)
  • Corrupt HD
  • Old technology (it’s SATA; the drive is like six months old)
  • lack of RT kernel. I built this RT kernel today.
  • Going to be solved by installing a different operating system. Please don’t.

ETA: I got the ext3 filesystem upgraded to ext4, which made all those above numbers get dramatically smaller, but no further XRUN improvement. So I then disabled journaling, a configuration which outperforms raw ext2 in benchmarks I saw, and the machine is screamingly fast despite the RT kernel…
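For the curious, journal removal is a tune2fs feature flag; a sketch, assuming the target partition is /dev/sdb1 and unmounted (don’t try this on a mounted root filesystem):

```shell
sudo umount /dev/sdb1
sudo tune2fs -O ^has_journal /dev/sdb1   # clear the has_journal feature
sudo e2fsck -f /dev/sdb1                 # a full fsck is required after the change
```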

…and it hasn’t made one goddamn whit of difference in the remaining XRUNs. WTF, computer? WTF.

ETA2 (23:51 18 August): Okay, while screwing with the filesystem did solve many XRUN problems, there are still other XRUNs which are apparently unrelated, most notably, the master-record-enable XRUN. Even moving the project to a tmpfs RAM disk and running from there produced identical results, so I’m concluding this is an entirely separate problem.
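The RAM disk test is easy to reproduce; a sketch – size and mountpoint are arbitrary:

```shell
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
# Copy the session there and run the repro; in my case the XRUNs
# still appeared, which is what rules out the storage stack.
```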

I’ve already done pretty much everything there is to do on the LinuxMusicians configuration consultation page, and my setup actually passes their evaluation script. I should be golden, but I’m not. Help?

ETA3 (0:26 19 August): Every two minutes, right now, with the system mostly idle, I’m getting a burst of XRUNs. On an idle machine. But it is exactly every two minutes. And while Ardour remains on top of Top even when idle (at 10% of CPU and 13.5% of RAM), Xorg pops up just underneath it, and its CPU use spikes.

What does Xorg do every two minutes? Anybody? Seriously I have no idea.

ETA4 (13:19 19 August): ARDOUR 3 TRIGGERS SESSION SAVE EVERY TWO MINUTES BY DEFAULT. Disabling that STOPS the two-minute failures entirely. We’re back to file system adventures. Holy hell. THIS HAPPENS EVEN ON RAMDISK so it’s not filesystem or media specific. What the hell is going on here?

grabbing attention

Do you read in two quick F-shaped scans? That eyescan study says most of you do. It’s an important question if you’re trying to gain notice on the web – which, as a musician, I of course am. I have two lines, maybe one phrase each, to grab people passing by, before they’re done and out.

Fancy formatting doesn’t help; you’ve learned to think that means ads. Honestly, I think that’s positive adaptation, even if it leads to amusing results like 86% of test subjects being unable to find the US population figure on the US Census’s web page, despite the fact that it was bright red and the largest text on the page.

Almost everybody threw it away as an ad, because, frankly, it looks like one.

Two months ago, I rebooted this website. I cleaned it up, simplified some pages, improved organisation, added post collections – lots of starch in the collar. Plays are up, hits are up, revisits are up – all those good things.

But I have enough data now to see that there are two audiences here. You? You’re one of them. You pop in, read an article, and you’re done – particularly if reading on an echo. Some of you use the players on the left; some of you read more posts. A small but cool percentage of you browse collected articles. That’s awesome. Go you!

The other audience will never see this post. They’re like dark matter; there, and massive, but invisible.

In two months, hundreds of people have visited the front page of this website. They play music – primary reboot goal attained! – they look at videos, glance at reviews and press pages, and once in a while hit the contact form. They explore more pages per visit than you do.

And they never come over here. Ever. Unless Google is lying to me, not once in two months has even one of these visitors clicked on “Blog of Evil” in the navigation bar. Not even once.

It’s an astounding result, really. I’d like to get them over here, too; get them engaged.

I don’t know how, yet. I’ve made one small change to the front page of the site, tonight – I’ve changed ‘Latest Schemes from the Blog of Evil’ to read ‘This News Just In from Supervillain Central,’ and linked it to the blog front page. Given the special-text-gets-ignored result in the second study above, I’ve also dimmed it from bright yellow to slightly-less-bright and slightly-more-greenish yellow, to blend in a little more. It’ll take a while to collect enough data to know whether it matters, but the theory is sound.

Maybe I need to change it to “news” or something boring like that. Gods, I hope not. (eta: After some feedback on Livejournal, I realised that whether I like it or not, people weren’t hitting the Blog of Evil link. Let’s try “Blog.” Also “Home” instead of “Story.” I mean, one of the bullet points in the article is Clever phrasing drives away clicks, just as effectively as ad-like text.)

Meanwhile, if you’re in this audience, if you’re here off a search, or a trackback, or you’re just new, I’d like to get you engaged in the other direction.

In some ways, you’re a bigger challenge. Most new posts are read on echos – Tumblr, Livejournal, Dreamwidth, via RSS, and so on. But collections and semi-viral articles like Power and Supervillainy have large numbers of readers on the band site itself. Those people – you – you’re difficult to keep. And while I’m thrilled that you – whoever you might be, reading this, in the future – like my writing enough to get down this far… my art is the music.

That’s the goal.

i know what it means
to work hard on machines
it’s a labour of love
so please don’t ask me why

kitting out cheap handout

Thank you so much, everyone who stayed for my end-of-comicon panel on building a recording kit on the cheap! I’ll post about the convention in general tomorrow, but as promised, a PDF copy of the handout is right here (click to download).

Also, the series of blog post articles I talked about – which go into considerably more depth than I could in the presentation – can all be found here. That link takes you to a master post which collects all the articles into one convenient place. It’s also linked to from the blog itself, in the left column, below the RSS feed and podcast links, so bookmark that if you like for later reference.

More tomorrow – for now, time to unpack!

what is making it

Hello, The Future! appeared with Glen Raphael on Geeky and Genki, talking about working the geekmusic scene – or, at least, one of them. Obviously, this ties right in to my whole series of posts on music in the post-scarcity environment, and covers a lot of the same ground, but in convenient podcast form.

A couple of the comments she and Glen made were kind of interesting and even vaguely surprising to me. First, she’s in the no-backing-tracks-live camp. I think that’s probably true for her section of the geekosphere, absolutely. But at the same time, I look at chiptunes bands, nerdcore artists, occasional geekrock people, quite overtly using the backing tracks – typically from an iPod or laptop – and wonder whether some of that won’t make its way over. It’s something I’ve explored but haven’t tried yet.

There’s material I just can’t do solo that I’d really like to do solo – if I used my phone or laptop or something. I guess for me the differential is faking it; if you’re up there with your zouk or guitar or whatever and actually playing and singing it, and not pretending to do so, is there an actual problem with an effects track or extra-instruments track? I go back and forth on it myself.

Two other takeaways, for me. First, that the “friendship buy” is also known as the never-going-to-listen-to-it buy. And that’s still very nice of them, and supportive, I think, but it doesn’t build a fanbase because they aren’t going to listen and then tell other people. I’ve seen this expressed before, but I just love that terminology.

Also, and I’ve worried about this: both Nicole and Glen asserted in strong terms that putting as much as you can out there doesn’t hurt you, even if some of it isn’t, in the end, very good. Even if some of it is kinda bad. You can talk about contaminating the potential fanbase, but what they both point out is that the listeners will do the sieving for you, so it’s better to have more production and less filtering on the artist side.

This is the total opposite of the photography scene, and the fine arts scene, which I suspect has to do with relative sizes of potential audience. But I’m just speculating.

Anyway, it’s a good overview, and they talk about lots of things beyond what I’ve covered in this post. It all applies to any creative endeavour, so give it a listen if you’re trying to get your work out there.

on players and websites

It’s been about a month since I rebooted the website, and I wanted to talk about early results! First, hi all you new people! Thank you for coming by and I hope you like it here. ^_^ This is a DIY post; I try to do them on a regular basis, usually on Wednesdays.

So! A recap of what I did to the website. It wasn’t a major redesign; it was more a reimplementation – and better implementation – of the original idea. I overhauled the blog to look like the rest of the site; I put in my own videos page instead of linking off to my YouTube channel; I did a lot of general cleanup and fine-tuning.

I also threw in some collections of themed posts (the studio buildout series, the travel case construction series, and music in the post-scarcity environment), and added links to them, and started linking the Podcast page in a bit.

Finally, I simplified the hell out of the front page, throwing out lots of crap. I’d fallen into the throw-a-little-bit-of-everything-at-the-front-page trap; it’s awfully, awfully tempting to do.

All the refreshes/re-implementations followed the principle of each page having a primary goal (which gets the most space), a secondary goal, and, optionally, a tertiary goal.

The big goal on the front page refresh was to get more plays on in-site players. The big goal on the blog was to get more views – and particularly depth of views – with a secondary goal of getting some plays.

Here’s what the reboot has done for my music plays via embedded players on the website. Each dot is a month; the most recent dot is not yet an entire month:

Embedded plays this month are more than the entire previous year combined.

Now, some of that is going to be cannibalisation of plays from the bandcamp-hosted “music” page; those aren’t counting as embedded. Let’s look at total plays:

Plays this month are about equal to plays of the last two and a half months combined.

That’s rather dramatic, isn’t it?

Now, I have had a bit of a traffic spike this month, mostly related to the SFWA debacle. But I’ve had those before – actually, larger ones – and I’ve done the math for comparison.

In this spike, people played 17.3 times as many tracks per 100 page visits as in the last major traffic spike, despite having the same players on the blog, just in a different and apparently less clear place.

I would say that while this is early, the preliminary results here are very promising. Some of it is a result of newness, but hopefully not all.

Now, about depth of views. That’s much less dramatic and a little less clear.

A lot more random people are finding the blog on searches; those collection-posts are search-engine magnets. That’s led to a climb in the ‘bounce’ rate, where people hit one page, go nope, and bounce off.

Subtracting out the SFWA bounce, pageviews are up about 158% – at 258% of the month before. As mentioned, bounce rate has climbed by 10%, rather than dropping as I’d hoped; but at the same time, the amount of time spent per page by viewers has climbed (only by about 3%, but that includes those bounces), and pages per visitor appear to have climbed by about 17% – a healthy increase.

So, less clear, particularly with the rising bounce rate, but still elements of promise.

The biggest surprise, by far, though, has to be discovering that trackbacks still matter. I didn’t get a big SFWA bounce by writing about SFWA’s sexism and fails; I got a big SFWA bounce by writing about SFWA’s sexism and fails and linking to other blogs which support trackbacks so people could find me.

I had no idea people followed trackbacks. But they do. Sometimes, in flocks. HI!

Anyway, to sum up: I think the three-goals approach is so far proving effective. We’ll have to see how it stands up over the next few months, of course, but it’s off to a good start. Consider it when designing your own website.

As for further goals: I’d like to see more comments on the band blog home proper; most comments are usually made on the echo which is cross-posted to Livejournal, with Dreamwidth also regularly seeing comment traffic, and some at Tumblr and Facebook. The advantages of echos outweigh the lack of centralised comments, at least for now, but I really wish there was a way to copy them over to here. That’d be awfully nice.

atTENTtion get it ar ar ar

I’ve never used a tent on tour, which is kind of unusual amongst the people I know in music, but for these Leannan Sidhe gigs I need it. SHITTY CELL PHONE PICS, AHOY!


jfc this is a big tent


i really do not remember this tent being so big

This is actually one of TWO tents I own. The other is older – I bought it used – and sleeps five people rather than four like this one, but I think it’s about the same size, actually. MUCH harder to set up, though.


fred wants to know wtf i am doing with a tent that big


honestly i have had dorm rooms smaller than this tent

I timed taking it down, which is the part that will need to be done most quickly because of schedules: 14 minutes from fully set up (which it isn’t in these pictures – a kind of rain cowl goes over the top) to fully packed in the single carry bag. I’m going to do it again later for practice, because I haven’t used it since I don’t even know when. 2005?

ADVENTURE!

audiobook giveaway

Anna is giving away both ebook and audiobook copies of Valor of the Healer. This isn’t the universe the soundtrack is from, but it is the same writer, and you should go enter!

The soundtrack has been dragging on, and how long it’s been taking has been getting on my nerves in a serious way. There’s not all that much to be done about that: I wasn’t ever intending to play melody on the traditional Irish tune portions, then had to, which meant learning the tunes well enough to play them in studio; and then, when some of the selections weren’t going to work in a traditional-set-type arrangement, it meant learning to write tunes, or portions thereof, in a way that sounded right.

We are moving along, tho’. Slowly. We had Ellen Eades in last week, recording hammer dulcimer for one set, and Sunnie Larsen will be in tomorrow, recording more fiddle. I’m desperately hoping that between dealing with the remains of Sewer Implosion 2013 and rehearsal tonight with Leannan Sidhe for their six shows over on the dry side, I’ll be able to rebuild the Chapter 1 track project, which corrupted itself after a crash.

Don’t worry, we didn’t lose any data, it’s just… jumbled a bit. So I have to import everything into a new project. It’s not difficult, just incredibly annoying and a bit time-consuming.

But that’ll depend upon letting the CD labeller get finished with the short run of CDs that are being printed up for those aforementioned shows. See how everything stacks up and gets in the way of everything else? So frustrating.

Whup, sounds like a certain wallboarder has had to get out a larger saw. I’d best check what’s up.
