cardboard electric guitar

This is pretty cool – a cardboard company and Fender Guitars got together to make a one-off cardboard guitar. And sure, it’s a promotional video, but it’s a cool-looking end product.

I love how you can see through so much of it:

They don’t tell you much about engineering for strength, but I have some thoughts, because the places you can’t see through tell you a lot about how they made it work. You can clearly see the truss rod, common to all metal-strung guitars. But there’s also a keel added in back:

And the one through-shot of the head shows that, surprisingly, the tuning knobs may have a backplate, but they don’t seem to be attached to the neck bracing. That’s just the cardboard.

I’m more curious about the sudden change in the finish between shots, right after they’ve sanded the body into the shape they want. They put a lot of something into it, and I really doubt that’s just lacquer.


Before


After

Hardeners, maybe? I don’t know. But I don’t see how it holds its edges through repeated playing without some sort of chemical additive. And I’m really curious about the string bending some of the players are doing – bending against that texture looks like it’d be really messy, but they’ve got it going. Is it just fret pressure and nothing on the board, or is the top layer filled with something transparent?

Don’t get me wrong; I’m for it. I think it’s Neat, with a capital N. Frankly, I think it’s gorgeous, and I can only imagine how little it weighs. Getting a bass built like that might be amazing, if the cardboard has been made durable enough. I just want more details on what it took to make it work.

(h/t to George P. Burdell III on Facebook for the link)

i guess that’ll teach me to use a drum machine

hey guess what

hydrogen – a linux-based drum machine – has decided that its 151 beats per minute should be much faster than ardour’s 151 beats per minute.

i gotta tell you, this is turning into a “why do i even try” week. really is.

family visit

Minion Anna had family in town for a few days; we’ve been showing them around town and also took them to see Ghostbusters, which is why I haven’t been online much. (Holtzmann is now my crazy crazy movie science girlfriend! ♥)

Playing tourist host is really quite odd; it makes your own town feel like some sort of theme park. But I’d never gone to the Chihuly Garden at Seattle Centre before, and that was really rather more interesting than I’d expected. I took a bunch of pictures; here are some of the ones I liked best.

Bigger versions at Flickr.

back to real recording

I did a lot of vocals recording Monday – some tracks I intend to keep, some more or less placeholders, scratch tracks for other vocalists – all on the new system, all at the absolutely goofy 0.7ms buffer setting, just to see if the system would actually work, being driven that hard all the time, over hours.

It does.

It hiccoughed a couple of times. Nothing involving data loss – after waking back up from screen lock, the audio subsystem had to be restarted manually rather than coming back up on its own. I need to disable screen locking anyway. Once, during some playback, I heard a momentary pause, though no XRUNs showed up in the status monitor, so I’m not sure what’s up with that.

I should probably, I dunno, step back a bit? Give it some margin for error? But so far, I’m not being forced to.

In other news, all 39 episodes of Revolutionary Girl Utena are legally on YouTube now. YOU HAVE NO MORE REASONS TO DELAY AND MAY START WATCHING NOW.

(Because jfc the chemistry between Utena and Anthy is smokin’ right out the goddamn gate. I do not like the series’s ending, for reasons addressed pretty directly by Avatar: The Legend of Korra, but everything up until that is amazing.)

so hey, usb chipsets totally matter

In yesterday’s post, I posed a question: do USB chipsets matter in the 2.0 environment? I had reason to suspect they might.

The answer is holy crap yes they matter they matter so much it is unbelievable.

First, let me talk about what prompted this research, so you’ll know why this matters.

On my old sound interfaces I had live monitoring in hardware, so I didn’t have much need to care about latency. Since that won’t mean much to most people, I’ll explain: when recording, it’s good if you can hear yourself in headphones. If you’re multitracking, it’s critical.

My old audio interfaces did this with direct connections in the hardware. Whatever came in the microphones also went out the headset. There are advantages to this method, but also disadvantages, in that you aren’t actually hearing what’s being recorded, just what’s coming in at the microphone jack.

But now, I have this shiny new 1818vsl, which doesn’t do hardware monitoring under Linux. Higher-level kit generally doesn’t provide that; the assumption is that you have enough computer to send back what’s actually being recorded, effects and all, and that you’ll do that instead.

This means I now have to care about latency in my system. Latency is basically delay: between mic and computer, and between computer and headset. And if the computer is feeding my monitor headphones, that delay matters. You want to hear yourself live, or close to it – not with, oh, a quarter second of delay or something horrible like that.

Now, the good news was that straight out of the box on Ubuntu 16.04 (the latest long-term support version), I had better, lower latency numbers on my new 1818vsl than on my old hardware, when I was using that on 12.04. I could get down to a buffer size of 256 samples with three periods, which gave me about 30ms basic latency – roughly half what I had with my old hardware and old install. I could use it as-was.

But I couldn’t go any lower on those buffers. One more setting down, and even playback would lag. It’d be okay until the system had to do anything else; then you’d get a playback pause, or a skip, or – presumably; I didn’t bother trying – lost sound while recording. That’s unacceptable, so 30ms was the lower limit, and I wasn’t sure it was a safe lower limit.

And that’s what got me doing all that chipset research I talked about yesterday, and I ordered a new USB card (it plugs into a PCI socket) based on that research. I was hoping for a couple of milliseconds less latency that I wouldn’t actually even use; I just wanted a safety margin.

So that new card arrived on Sunday, with its OHCI-compliant chipset made by NEC, and I popped it into the machine and started things up with normal settings.

At first, I was disappointed, because I only saw about half a millisecond less lag, instead of the 1-2ms drop I’d hoped to see. But across tests, it was more consistent – it was always at that same number, which meant I could rely on that 30ms latency in ways I wasn’t sure I could before.

Then I decided to see what would happen moving the sample buffer setting one level lower, into what had been failure mode. And the result was 1) it actually worked just fine, where it hadn’t before, and 2) when running analysis, tests showed much lower latency at that setting than with the previous USB ports.

That was an ‘oh ho’ moment, because it implied that the 256-sample run rate was basically the spot at which the on-motherboard USB could just keep up, and trying to run faster wouldn’t produce any actual improvement. It’d try, but fail, and time out.

So I did a couple of recordings on that, and they all worked. Then I dropped it another level, until finally, I just said hell with it, let’s just set it as far down as the software will allow and see how hilariously we explode.

I just successfully recorded test tracks four times with these settings, on the new card:

0.7 milliseconds isn’t even something you think about on USB 2.0. 2.8ms, maybe, okay. I’ve seen that managed a few times before, and that’s genuinely indistinguishable from realtime/hardware monitoring. But 0.7ms?

Seriously, this is well into “…is that actually possible?” territory. I’ve never even heard of someone running over USB 2.0 at latencies this low.
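The arithmetic behind those numbers is simple enough to sanity-check. A sketch of the usual JACK-style buffer math (an illustration, not ardour’s exact reported figure; `latency_ms` is my own helper name, and the 32-frame period is my guess at what corresponds to the 0.7ms setting):

```shell
# Rough JACK-style buffer latency: frames-per-period x periods / sample rate.
# This is the one-way figure; the round trip through capture and playback
# is roughly double it.
latency_ms() {
  # $1 = frames per period, $2 = periods, $3 = sample rate in Hz
  awk -v f="$1" -v p="$2" -v r="$3" 'BEGIN { printf "%.1f\n", f * p / r * 1000 }'
}

latency_ms 256 3 44100   # the old floor: ~17.4 ms one way
latency_ms 32 1 44100    # a single 32-frame period: ~0.7 ms
```

Doubling the 256×3 figure for the round trip lands in the same ballpark as the ~30ms I was seeing at that setting.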

So, I guess it looks like the chipset matters a whole lot. Maybe not for most applications, and maybe not in the same way as in USB 3.0 or in FireWire, where there are serious compatibility issues. But in the 2.0 world, in realtime audio, it appears that the chipset makes all the difference in the world.

And yet, I can find this nowhere online. I’m beginning to think nobody bothered until now. Certainly when I’ve asked about it, the response has been “why are you on USB, get FireWire” or “why are you on USB, get PCI” – because sure, I want to throw out all this hardware and start over, THANKS NO.

I think USB users have been trained just to accept it and deal. But surprise! You don’t have to! You can actually get a better USB card, if your system allows it, and it’s $30 instead of $1300!

So, HELLO, OTHER SMALL-STUDIO MUSICIANS! You want a chipset that uses OHCI at the USB 1.1 level, even if it’s a USB 2.0 card or later, because the 1.1 layer still matters and still gets invoked by the higher-order drivers for card management. See previous post for why that’s important.

This means avoid Intel and VIA chipsets, and look for NEC or SiS – or anything else that loads OHCI drivers and not UHCI. If you’re on Linux, you want to:

cat /proc/interrupts | grep usb

If you see “uhci_hcd” in there, you have a UHCI chipset running your USB port, and getting a new USB card with an OHCI-compatible chipset (and disabling whatever’s already installed) might help with your latency issues.
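If you want slightly friendlier output than eyeballing grep, here’s a small sketch that labels whichever host-controller drivers are actually taking interrupts. (The `classify_hcd` helper and its labels are my own, not a standard tool.)

```shell
# Label each *_hcd host-controller driver found in /proc/interrupts.
# classify_hcd is a hypothetical helper, not a standard utility.
classify_hcd() {
  case "$1" in
    *uhci*)        echo "UHCI - CPU-driven (Intel/VIA); the one to replace" ;;
    *ohci*)        echo "OHCI - hardware-driven (NEC/SiS); what you want" ;;
    *ehci*|*xhci*) echo "EHCI/xHCI - the USB 2.0/3.0 layer, present either way" ;;
    *)             echo "unknown" ;;
  esac
}

# Feed it every *hci_hcd entry the kernel is actually servicing:
grep -o '[a-z]*hci_hcd' /proc/interrupts 2>/dev/null | sort -u |
while read -r drv; do
  echo "$drv -> $(classify_hcd "$drv")"
done
```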

Good luck!

usb 2.0 chipsets, digital audio workstations, and linux

I’ve been trying to find out whether there’s any sort of difference between USB 2.0 cards, specifically as it addresses the needs of digital audio workstations on Linux.

Very few people in Linux communities seem to have addressed this question at all, and none I can find on the audio side. (FireWire, oh my gods, yes – huge lists. Just not USB.)

But I did a lot (a lot) of digging, and discovered via the Linux USB kernel driver dev mailing list(!) that while there’s not much difference on the USB 2.0 side, there are important differences on the 1.1 side. These differences manifest in two different driver models. That still matters at least a little bit in 2.0, because those 1.1 drivers still get loaded.

Anyway, the difference is that there are two very different driver interface models. One is UHCI, created by Intel and used mostly by Intel and VIA. The other is OHCI, which Compaq pushed when it was still around, and Microsoft preferred; it has less intellectual-property load, and NEC, SiS, and some other makers use it. If you see a “Mac compatible” card? It’s going to be OHCI.

The OHCI model puts a lot more of the business of doing USB into hardware on the card; UHCI has the processor do that work. And while that isn’t a heavy load, it is a nonzero load, and more importantly it means that UHCI chipsets require more CPU attention than OHCI chipsets, on a recurring basis. That is something we don’t need in a digital audio workstation; there are only so many board interrupt opportunities, and I want them for moving data, not servicing USB mechanics.

Once I knew that, I did more searching and found people saying how switching to a NEC chipset card had (in one case in particular) ‘saved their bacon’ specifically on their digital audio workstation. They were using ProTools on Windows, not Linux, but it was still with a USB audio interface.

The chipset used by my on-motherboard USB ports is, of course, Intel, and therefore UHCI. (And UHCI drivers are actually loaded, I checked.) There’s also an on-motherboard hub between the outside world and the one true root device; that doesn’t help anything either. So there’s a nonzero chance I’ll see improvement both from changing from UHCI to OHCI, and from moving to a true root USB device instead of a hub device. It won’t be much, but I’m only looking for a few milliseconds of latency here. And even that’s more for… reliability buffer, I suppose? Yeah. Reliability buffer, rather than pure necessity.
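That recurring CPU attention is visible directly in /proc/interrupts: the counters on the USB controller lines tick up every time the controller demands service. A quick throwaway sketch (my own script, Linux-only) to watch the rate:

```shell
# Sample /proc/interrupts twice and report how many USB host-controller
# interrupts fired in between. A chatty UHCI controller shows up as churn.
usb_irqs() {
  [ -r /proc/interrupts ] || { echo 0; return; }
  # Sum the per-CPU counter columns on every usb/*hci line.
  awk '/usb|hci/ { for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i }
       END { print s + 0 }' /proc/interrupts
}

before=$(usb_irqs)
sleep 1
after=$(usb_irqs)
echo "USB interrupts in the last second: $((after - before))"
```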

I’m mostly posting this 1) so I remember it and 2) so other people looking for this data can find it. HI! I can’t be the only one!

I’ll update this post if I get interesting results.

eta: INTERESTING RESULTS AHOY: CHIPSETS MATTER SO MUCH OMG. I’ll write up a post with details, post it tomorrow.

swearing in welsh and playing in mandolin

@elwoodicious responded to my button-mashing arglebargle on Twitter with, “when you swear in Welsh you know it’s serious XD” – but if you might know why I’d be having client timeouts against the Varnish web cache, and only when using the faster segment of the network, here’s today’s/last night’s data dump.

Otherwise, I’ll be adding mandolin to “We’re Not Friends,” which I’ve been working on when not working on servers. (I haven’t talked about our DNS server also deciding that the login daemon was both optional and needed to be restarted every 60 seconds, have I? No. Well, it did; I fixed that too.) But…

“We’re Not Friends” is pushing me. Not from a technical standpoint, or even from an emotional standpoint, but from a communicating that emotion standpoint. Musically speaking, it’s of about average complexity – I’ve released far more complicated material. (Particularly “Stars,” hoo boy. That thing is a tiny opera.) But…

There’s an emotional complexity here that I have to get across, and I need every part of it onboard to make it work. That’s all there is to it, but it’s subtle: most songs only have time for one emotional tone, and I’m trying to communicate a substantial shift in emotional tone in three and a half minutes. But if I can do that…

Right then, back to it.

anybody out there know wordpress internals?

Anybody know anything about what the comments page in the WordPress administrative interface might be doing to call into themes?

I’ve been trawling through the Codex for a while, but hey, surprise: WordPress is a big project and this is a lot of code to trawl. But basically, the comments page in the administrative interface takes a very long time to load (>20s) if my current desktop theme is in use.

If I switch to twentysixteen (the current standard included theme), it takes very little time – basically, an immediate load. That’s changing nothing else, and it is 100% reproducible.

My suspicion is that it’s running some sort of check against the comments contents and/or metadata. I suspect specifically something to do with the avatars, but that’s very much a guess.

This is 100% unrelated to my digital audio workstation woes – completely different machines – and is something that has been bugging me for a while. It started all at once, after we rebuilt the server following the hax0r last year.

Anybody out there with knowledge? Do I get lucky?

eta: I was chatting with mpol on the wordpress IRC channel, who found something in the theme’s functions.php that I’d looked at askance before, and it’s this filter call:

add_filter('get_comments_number', 'comment_count', 0);

And if I comment that out, suddenly we behave a lot better and I don’t see a functionality loss. Anybody know what this might even be doing? It’s line 364 here.

eta2: I know what that filter was doing now. I think that without that filter, their custom comments counter (which added behaviour I didn’t actually like and had worked around elsewhere) becomes redundant, and I’ve commented it out entirely with no bad behaviour so far. Anyone see anything weird with comment counts?

eta3: So far this is working much better! But possibly related, and possibly not, I am still getting admin-panel connection resets at random. Reloads always work, and of course, Query Monitor is not helpful here because the reload works fine without issues or errors. All of Firefox’s explanations are wrong, and this happens under Safari too.

The connection was reset

The connection to the server was reset while the page was loading.

The site could be temporarily unavailable or too busy. Try again in a few moments.

eta4: Many super-thanks to Kirrus on Twitter who has been majorly helpful on this. I’m still seeing the connection reset, but the comment issue is cleared out and along the way, using tools he and mpol recommended, I found an assortment of bugs affecting performance in the two abandonware plugins I still run and more or less privately maintain. Also, one in my now-custom once-piano-black theme which would’ve meant White Screen of Death under PHP7. I’LL NEVER CHANGE THEMES NOW XD

eta5: For even more confusing information on the remaining problem, see this entry on Dreamwidth. Honestly, what the hell?

fit and finish

So, I’ve had my Gnome3 desktop up and running for a while (because Unity has not improved with time), and mostly things are okay! But there are small things bugging me.

My desktop, in tiny form, for reference:


Yes, that offset is intentional. The monitor mounting points don’t match.

ONE: Why aren’t these tips being clipped?


Peek-a-boo!

All the windows have them. Sometimes they’re black. So some sort of clipping isn’t happening. Is this because I’m using the open-source nvidia driver instead of the official one, or is something else going on?

TWO: I can’t run gnome-tweak-tool because it fails out if you don’t run pulseaudio. Is there a way around that? I suspect I might be able to solve item one if I could run item two.

THREE: I can make a link on the desktop to directories with ln -s, of course. But if I make one to Dropbox, the local-instance directory path ends up being /home/kahvi/Desktop/Dropbox instead of /home/kahvi/Dropbox. Even if I put things in the directory – and it is the right directory – Dropbox won’t sync them, because the local reference at the time of addition was wrong, and it never notices later, so it never syncs.

I can alt-F2 and type “Dropbox” and get the folder with the right local path, but that’s kind of lame. I can also pull up the Dropbox mini-app and go through a couple of menus to get there, but that’s also kind of lame. It’d be nicer if I could just click on the icon like I used to do. Or better yet, drag onto the icon, that’d be best.
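One workaround I’d try (a sketch, not a tested fix; the launcher filename is mine, and the Dropbox path is the one above): instead of a symlink, drop a .desktop launcher on the desktop that opens the real path through xdg-open, so the file manager never sees the Desktop-relative alias at all.

```shell
# Hypothetical workaround: a desktop launcher that opens the real Dropbox
# path via xdg-open, rather than a symlink the file manager resolves into
# the wrong local path. Adjust the home directory for your own setup.
desktop_dir="$HOME/Desktop"
mkdir -p "$desktop_dir"
cat > "$desktop_dir/dropbox-folder.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Dropbox Folder
Comment=Open ~/Dropbox itself, not a Desktop-relative alias
Exec=xdg-open /home/kahvi/Dropbox
Icon=folder
Terminal=false
EOF
chmod +x "$desktop_dir/dropbox-folder.desktop"
```

Clicking that opens the folder under its canonical path, so anything dropped in afterwards carries the path Dropbox expects. It’s not quite drag-onto-the-icon, but it beats alt-F2.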

None of these are really big deals, but it’d be nice to get them worked out, so if you have some tips, throw them into comments? Thanks!

less an annoyatron and more an annoyaharmonica?

Friday night, a bunch of the Lair went out to see the Seattle Symphony and Chorale do Lord of the Rings: Fellowship of the Ring, and 1) holy crow, what a marathon for the performers, I mean damn, and 2) that worked surprisingly well as an art form. Also, the soloists were great.

I know that soundtrack better than I realised, too – I kept picking up small differences in performance, mostly breathing points with winds. There’s a bit towards the end with tin whistle that I don’t know how you do without a breath, and my suspicion now is “you don’t, you do it in post.”

Of course, as soon as we got to Boromir at the Council of Elrond, the entire room exploded in laughter, as was inevitable. DAMN YOU INTERNETS

I also picked up this monster in the gift shop:

And posted on Twitter, “YAY! I’ve found a whole new way to be annoying!” and then played bits from Lord of the Rings all the way home. But after playing around with it a while – really, it’s not so much an annoyatron as a harmonica with a keyboard. It could still be super annoying, but it will, nonetheless, be musical.

Sadly, it’s not chromatic – it’s C-major only – but it’s more flexible than you’d expect, and you can get a bit of vibrato out of it. I have no idea what if anything I’ll ever do with it, but it’s a legit addition to the noisemaker collection.

George (the cat), though, really hates it. So I guess it’s still an annoyatron for some of us. Poor kitty. 😀
