Archive for August, 2013

off to pax

Off to PAX! If there’s a proper Elder Scrolls Online demo, I’ll be queueing for that. I’m interested in Bit Brigade on Friday night, and The Doubleclicks on Saturday. Today is a big Expo Hall day for me in general, tho’.

Say hi if you see me!

finally doing some music again

Finally – finally – back to recording and doing some music; Sunnie’s in today for what should be a pretty short session. Also, I did some comping work last night, after the latest round of carpet cleaning. (I’m about to admit surrender and call professionals in early on the sealed-off areas of the Lair. And I thought supervillainy was evil!)

I’m going to PAX this weekend. I have unpleasantly mixed feelings about that, partly because of Gabe’s assholishness, and partly because of the whole long-term ramp-up in misogyny in gamers, which really pisses me off. But the way I look at it is this:

I was here first, you fuckers.

And I’m not yielding this ground. Not nearly that easily. So I’m going, because of All That, because Fuck You, No, I Was Here First, and I have the artefacts to prove it.

And hopefully I can forget all that and have a good time, because I’m sick of everything having to be a political act of defiance.

Well, that escalated quickly. I’m planning some shows, too, btw, but can’t talk about them yet. How’s your Wednesday?

small scale botnet attack

We came under DDoS attack from a mostly South Asian botnet today: around 3,000 unique IPs, which is not that large in the scheme of things. Probably an advanced hobbyist rather than a professional, given that we could sort it out and get back online as quickly as we did.

Furthest west IP was in Poland; furthest east, China. Southernmost probably Thailand.

Anyway, if you were trying to reach this morning’s blog post, that’s what happened; give it another go. We should be fine now. If you can’t reach anything on the website and are reading this via an echo, let me know; I probably blocked you accidentally too.

superdeer valium

Who has the job of coming up with random fake brand names for generic batteries and similar cheap miscellany? Who has this job? Because somebody has this job. I want to know who.


Powermax Superdeer. Really? Superdeer?

On a similar note, please enjoy Valium lightbulbs. The best part is how they’re marked 110v on one side of the box, and 130v on the top. But the valium lets us not worry about that sort of thing.

Now, even sedated, I have no idea how you make lightbulbs out of diazepam, but clearly, Uncle Fester has been ahead of the rest of us.

Maybe it was his idea. Maybe he’s the one who…

…oh god. What have I discovered?!

Forget the light bulbs. The light bulbs are unknowable. The light bulbs have never been known. You cannot have known about the light bulbs because no one has ever known about the light bulbs. We do not need them. We have the lights above the Arby’s, the ones that rush by as we pretend to sleep, and that is all that we need to know, because…


Yes. Yes, we do.

you have been warned

I found these on Tumblr and edited them up to print resolution. 😀


Shatterdome Maintenance Level

The studio is actually on one of the upper levels of the Lair. The last thing you want to do is lose satellite signal when you’re aiming energy bolts at your enemies. It’s just sloppy. Besides, if we’re going to be launching jaegers, who wants to wait in elevators?


Particularly when we drop the bass

Extreme crush hazard. Extreme.

and back again

Sorry for the intense bout of lag there – after getting back from Vancouver, Friday got swallowed whole by more cat problems and a mountain of paperwork. The weekend… yeah, mostly more cat pee cleanup. I tried to put my studio back together on Saturday, but had to take it apart again.

But tonight, at last, I think I’ve found the last hidden spot in my studio – the one that’s been driving me insane for a week*.

BUT. ENOUGH ABOUT CAT PEE. Vancouver and the Great Big Sea show at the PNE was awesome. Geri and Robert let us crash at their place again – thanks! The Fair, by the way, has this:


Oh Jesus

…which is something I’m sure nobody needed to know. I did not partake, but I almost did. Instead, I bought alcohol and Siegel’s, which are now safely stashed in our pantry.

Of course, the show:


Setting the Stage
(More photos on my Flickr stream.)

GBS was good, as always, but the experience – I hate to say this, but I knew it would be true – paled against seeing them last year on their home turf. The crowd was into it, don’t get me wrong, but not on the same order of magnitude.

The funniest moment in the show was when Alan realised one of the giant mobile sculpture-like metal towers behind the audience (Revelation) was actually a ride and there “are people up there,” at which point he was all, “…what did you do?!”

He looked genuinely freaked out. It was hilarious.

Thursday was the last day of Mt. Pleasant/East Van icon Rhizome, which is a sad and terrible thing – they’re moving to Toronto, partly for family reasons, and re-opening there. DAMN YOU, EASTERN STANDARD TRIBE!

I stopped by for a last lunch and to hang out a bit, and ended up putting them back in contact with another person who moved from Vancouver to Toronto, and took a few pictures; this one is fitting:


Accidental Sepiatone

The accidental sepiatone really was accidental – a mis-exposure and a happy accident. I have a regular-colour shot too, but I like this one better. Anna and I also helped Robert move his old rear-projection TV to the recycle centre; a relaxed and lazy day before catching the train south.

Once back, at King Street Station, we walked by this beautiful beast:


Private Car

I asked the station employee in the photo what the deal was, and she told me it was a private car. I know about these, but hadn’t yet seen one; essentially, you can buy an old rail car and pay rail companies to haul it around for you – with cargo trains too, this isn’t just CascadiaRail or Amtrak – with you and yours in it. It’s like an RV, but for the rails.

I want one so much. 😀

So, that’s a bit of catching up. Today we’ll have another go at putting the studio back together sans remaining stink, and try to get back on proper track.


*: Honestly, you have no idea how disruptive this has been. He started while we were gone, and it went from never to 2-4 times a day, and there’s not a day I’m not spending at least a couple of hours cleaning cat pee or aftereffects thereof, just trying to catch up. It’s all I did for a solid week; at least it’s not all my time anymore. On top of that, everything is stacked up in fenced-off areas he can’t reach; it’s like we’re in the middle of packing for a move, and that includes my studio. I’m having anxiety nightmares. We have an action plan now, but… yeah.

awfully pretty

I have to admit, it’s a tad breathtaking, in a good way, to get home on the train and walk into this station, after so many years of the horrible 60s “modernisation” and a decade of restoration.

It’s one hell of a first impression, I have to tell you.

Back from Vancouver. Bed now. More later. ^_^

talking of code

Courtesy Criacow, this… this is beautiful.

My contribution, were I to make one, would possibly be:

#define fork sleep(1);fork
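
To spell out the evil for anyone who doesn’t write C: once that macro is in scope, every innocent fork() in the victim’s code quietly turns into a one-second nap plus a broken assignment. A purely hypothetical victim:

#include <unistd.h>

#define fork sleep(1);fork

int main(void)
{
    pid_t child = fork();   /* expands to: pid_t child = sleep(1);fork(); */
    /* Every fork now naps for a second first, and the caller gets back
       sleep()'s return value instead of the child pid. Delightful. */
    return (int)child;
}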

For once, do read the comments.

possibly some intel wtfery

UPDATED: See below.

Okay, so the latest: we’re pretty sure this is not actually Xorg now. We’re back to session saves. Not I/O in general: specifically session saves, which is to say, saving the entire project.

See, the every two-minutes thing turned out to be a new feature in Ardour I hadn’t noticed: scheduled auto-saves, which turned out to be… every two minutes. Saves also happen whenever you enable master record, which is the other time I see it. So we’re pretty damn sure it’s Save Session.

We know it’s not I/O in general. Recording is actually far more I/O intensive, and once record is enabled and the save process is done, you can record all you want to without any problems. Bouncing existing material is also a complete nonissue.

It’s also not a filesystem issue: it happens even with RAMdisk, which is faster than anything else. And the behaviour reproduces itself perfectly on my non-USB on-motherboard Intel HD Audio card, so it’s not USB.

Now, to get into more detail, I’ve gone digging deep into the Ardour source code. BUT I HAVE AN IDEA, so bear with me.

In the source code, most of the save path lives in libs/ardour/session_state.cc.

Save works fine when plugins are deactivated, but triggers XRUNs – buffer over/underruns, meaning the audio engine needed more than 100% of the available DSP time for that cycle – when plugins are active.

That’s any kind of plugin, and it doesn’t seem to matter how few.

Save session calls a lot of things, including get_state(), which in turn gets latency data from plugins via (eventually) latency_compute_run() – the code for which is identical(!) in both the LV2 and LADSPA plugin interfaces.

latency_compute_run() calculates the latency by actually running the plugin. Not a copy: it runs, in place, the actual plugin instance that’s in use.

This is all in here:
libs/ardour/lv2_plugin.cc
libs/ardour/ladspa_plugin.cc

latency_compute_run() activates the plugin even if it’s already activated (!), then deactivates it on exit (which I guess is stacked somehow, because the plugins don’t deactivate in Ardour itself), and runs what amounts to a second thread against the same instance of the plugin. (Presumably, because how else would it work?)
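
One classic way to measure a processor’s latency empirically – and I’m not claiming this is exactly what Ardour does internally, just illustrating why a probe like this has to run the thing for real – is to push an impulse through the plugin and count samples until anything comes out the other side. Roughly, with process() standing in for the real plugin interface:

#define BLOCK 1024

/* Sketch only: run one block through a plugin's process callback and return
   the offset of the first non-silent output sample. */
static unsigned probe_latency(void (*process)(const float *in, float *out, unsigned n))
{
    float in[BLOCK] = {0.0f}, out[BLOCK] = {0.0f};
    in[0] = 1.0f;                      /* a single-sample impulse */
    process(in, out, BLOCK);           /* run the actual plugin on it */
    for (unsigned i = 0; i < BLOCK; i++)
        if (out[i] != 0.0f)            /* first sample that isn't silence */
            return i;
    return 0;                          /* nothing within one block */
}

The point is that it has to genuinely run the plugin to get a number – and here it does that against the live, in-use instance.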

This strikes me as a minefield.

And so, a hypothesis: this is causing the hyperthreaded, speculatively-executing Intel CPU I have to retrace work because of bad prediction and/or bad hyperthreading interactions.

Penalty for this in Intel land is large, and I have seen commentary to the effect that it is large in the Intel Core series I have. I suspect that the two versions of the active plugin may be continually invalidating each other(!) for the duration of the latency test. It may even be causing the on-chip cache to be thrown out.

This would explain why it stops being an issue when the plugin is not active.
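
If you want a feel for the sort of effect I mean – and this is only an analogy for the reasoning, not what Ardour literally does – the classic demo is two cores fighting over one cache line:

#include <pthread.h>
#include <stdio.h>

static volatile long shared[16];       /* shared[0] and shared[1] sit on the same cache line */

static void *worker(void *arg)
{
    long idx = (long)arg;
    for (long i = 0; i < 50000000; i++)
        shared[idx]++;                 /* each write invalidates the other core's copy */
    return NULL;
}

int main(void)                         /* build with: cc -pthread */
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)0);
    pthread_create(&b, NULL, worker, (void *)1);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld %ld\n", shared[0], shared[1]);
    return 0;
}

Pin both threads to one core and the fighting largely goes away.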

Thoughts?

ETA: Brent over on Facebook pointed me at this 5-year-old bug, which led me to try fencing Ardour off to a single CPU. And when I do that… the problem goes away. Now, this sounds terrible, but I’m finding even with my semi-pathological test project (which I built to repro this problem) I can get down to 23-ish ms latency with a good degree of safety. So clearly, no matter what’s happening, it does. not. like. multicore.
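
For anyone wanting to try the same fence at home: taskset -c 0 on the command line does it, or in code it’s a couple of lines of sched_setaffinity – a minimal sketch, nothing Ardour-specific about it:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                    /* allow CPU 0 only     */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)    /* pid 0 = this process */
        perror("sched_setaffinity");
    return 0;
}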

That said, with hardware monitoring (which I have) that’s plenty good enough. I could live with 60ms if I knew it was safe. 23ms being safe (and 11.7 being mostly ok but a little iffy)? Awesome. Still: what is this?

ETA2: las, who wrote most of and manages the plugin code, popped on and said what I described would totally happen … except the latency recalculation doesn’t actually get called during save. I appear to have just misread the code, which is easy to do when all you have is grep and vi and an unfamiliar codebase.

ETA3: Well, hey! Turns out that setting Input Device and Output Device separately to the same device – instead of setting Interface to the device and leaving the input and output devices at their default assignments – means that JACK loads the device handler twice, as two instances: once for input, once for output. Thanks to rgareus on Ardour Chat for that pointer.

I can see how they get there, but there really ought to be a warning dialogue if you do that.

That means on a single processor I can get down to 5.6ms latency and past my pathological repro tests cleanly. This is the kind of performance I’ve been expecting out of this box – at a minimum. Attained. I could in theory not even hardware monitor at these speeds – tho’ you really want to be down around 3ms for that, ideally. (I can actually kinda run at 2.8ms – but it’s dodgy.) Since I have hardware monitoring, I’m setting it all the way up to 11.6ms just to keep the DSP numbers down. But any way you look at it – this is awesome.
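
In case those numbers look arbitrary, they’re just the usual JACK buffer arithmetic: latency = frames-per-period × periods ÷ sample rate. Assuming two periods at 44.1kHz – my assumption about the settings in play, not gospel – the standard period sizes land more or less on the figures above:

#include <stdio.h>

int main(void)
{
    const double rate = 44100.0;             /* assumed sample rate    */
    const int periods = 2;                   /* assumed periods/buffer */
    const int frames[] = {64, 128, 256, 512};
    for (int i = 0; i < 4; i++)
        printf("%3d frames -> %.1f ms\n", frames[i],
               1000.0 * frames[i] * periods / rate);
    return 0;                                /* ~2.9, 5.8, 11.6, 23.2 ms */
}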

I was really hoping to get this system back to usability before heading off, and – success! Thanks to everybody who threw out ideas, even if they didn’t work, because at least there are things we get to rule out when that happens.

Also, I’ve started putting together a dev environment (with help from Tom – thanks!) so I can explore this further when I get back into town. Saves shouldn’t be doing this. It’d be one thing if it only happened writing to the HD and not to ramdisk – that’d be fine. But to ramdisk? No. Just… no. And the processor-core thing and the plugins-active-vs-not thing are just odd. Maybe I can find it.

linux filesystem performance help

NEW READERS: IT’S NOT ABOUT THE FILESYSTEM ANYMORE BUT IT’S STILL BROKEN: SEE UPDATES AT BOTTOM OF POST. Addressing filesystem performance only partly fixed it. Thanks!

Since always, I’ve had latency issues on my digital audio workstation, which is running Ubuntu Linux (currently 12.04 LTS) against a Gigabyte motherboard with 4G of RAM and a suitably symmetric four-core processor. CPUs run 20%-ish in use most of the time (and all the time for these purposes), and I never have to swap.

In this configuration, I should be able to get down to around 7ms of buffer time and not get XRUNs (data loss due to buffer over/underruns) in my audio chain. 14ms if I want to be safe.

In reality, I can’t even manage 74ms reliably, and that has hitches I just have to live with. To get no XRUNs, or close to it, I have to go up to something like 260ms, which is insane. I even tried getting a dedicated root-device USB card – I’ve long assumed it was some sort of USB issue. But no.

With some new tools (latencytop in particular) I have found it. It’s the file system. Specifically, it’s in ext3’s internal transaction logging. To wit:

EXT3: committing transaction     302.9ms
log_wait_commit                  120.3ms

If I turn off access-time (atime) updating, which I tried last night, I get rid of 90% of the XRUNs, because the file system does about 90% less transaction logging once it isn’t stamping all those inodes with new access times.
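
For anyone playing along at home: the normal way to do this is adding noatime to the filesystem’s options in /etc/fstab and remounting. Purely as an illustration of what that option means – the mount point here is just an example, not my actual setup – the same remount expressed through the mount(2) call looks about like this:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Remount an already-mounted filesystem with access-time updates off.
       "/home" is a placeholder; source and fstype are ignored on a remount. */
    if (mount("none", "/home", NULL, MS_REMOUNT | MS_NOATIME, NULL) != 0)
        perror("mount");
    return 0;
}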

But any attempt to write – well, you can guess. Even the pure realtime kernel doesn’t help; I compiled and installed a custom build of one today, but apparently this is still atomic: I get exactly the same behaviour. I may be able to live with that to some degree, because it’s a start-and-stop-of-writes thing, and as long as it doesn’t trigger during writes, I can get by.

But it’s bullshit, and it pisses me off.

I’m currently in the process of upgrading ext3 to ext4. I’d like to think that would solve it, given ext4’s dramatically better performance, but I have no such assurances at this point. I genuinely thought the realtime kernel might do it.

DO YOU HAVE ANYTHING YOU CAN TELL ME, DEAR INTERNETS? Particularly about filesystem tuning. Because this shouldn’t be happening; it just shouldn’t. Honestly, three tenths of a second to commit a transaction? I’ve been places where that kind of number was reasonable; it was called 1983, and I don’t live there anymore.

Anybody?

THINGS IT IS NOT:

  • Shared interrupts
  • This particular hard drive (the previous drive did it too; this one is faster)
  • The ondemand CPU frequency governor (I’m running performance)
  • This particular USB port, or a USB hub, or an extension cord, or anything of the sort
  • Bluetooth or other random services (including search)
  • A corrupt HD
  • Old technology (it’s SATA; the drive is like six months old)
  • Lack of an RT kernel (I built this RT kernel today)
  • Going to be solved by installing a different operating system. Please don’t.

ETA: I got the ext3 filesystem upgraded to ext4, which made all those above numbers get dramatically smaller, but no further XRUN improvement. So I then disabled journaling, a configuration which outperforms raw ext2 in benchmarks I saw, and the machine is screamingly fast despite the RT kernel…

…and it hasn’t made one goddamn whit of difference in the remaining XRUNs. WTF, computer? WTF.

ETA2 (23:51 18 August): Okay, while screwing with the filesystem did solve many XRUN problems, there are still other XRUNs which are apparently unrelated, most notably, the master-record-enable XRUN. Even moving the project to a tmpfs RAM disk and running from there produced identical results, so I’m concluding this is an entirely separate problem.

I’ve already done pretty much everything there is to do on the LinuxMusicians configuration consultation page, and my setup actually passes their evaluation script. I should be golden, but I’m not. Help?

ETA3 (0:26 19 August): Every two minutes, right now, with the system mostly idle, I’m getting a burst of XRUNs. On an idle machine. But it is exactly every two minutes. And while Ardour stays at the top of top even when idle (at 10% of CPU and 13.5% of RAM), Xorg pops up just underneath it, and its CPU use spikes.

What does Xorg do every two minutes? Anybody? Seriously I have no idea.

ETA4 (13:19 19 August): ARDOUR 3 TRIGGERS SESSION SAVE EVERY TWO MINUTES BY DEFAULT. Disabling that STOPS the two-minute failures entirely. We’re back to file system adventures. Holy hell. THIS HAPPENS EVEN ON RAMDISK so it’s not filesystem or media specific. What the hell is going on here?
