I did! There are a few places it transcribes incorrectly, but overall I'm very impressed. Here's the first ~30 seconds:
[00:00.000 --> 00:09.000] Look, I was going to go easy on you, not to hurt your feelings, but I'm only going to get this one chance.
[00:09.000 --> 00:11.000] Something's wrong, I can feel it.
[00:11.000 --> 00:17.000] It's just a feeling I've got, like something's about to happen, but I don't know what.
[00:17.000 --> 00:21.000] If that means what I think it means, we're in trouble, big trouble.
[00:21.000 --> 00:24.000] Had to be as bananas as you say, I'm not taking any chances.
[00:24.000 --> 00:26.000] You're just one to die for.
[00:26.000 --> 00:32.000] I'm beginning to feel like a rap god, rap god. All my people from the front to the back nod, back nod.
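For anyone who wants to reproduce this, the timestamped lines above are roughly what you get from the openai-whisper Python package; this is just a minimal sketch, with the model size and filename as placeholders:

    import whisper  # pip install openai-whisper

    def fmt(t):
        # format seconds as mm:ss.mmm, roughly matching the output above
        m, s = divmod(t, 60)
        return f"{int(m):02d}:{s:06.3f}"

    # "base" is a placeholder; the larger models are slower but more accurate
    model = whisper.load_model("base")

    # transcribe() returns the full text plus per-segment timestamps
    result = model.transcribe("audio.mp3")  # hypothetical filename

    for seg in result["segments"]:
        print(f"[{fmt(seg['start'])} --> {fmt(seg['end'])}] {seg['text'].strip()}")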
It was doing it slowly, but hadn't got to the insane bit when I killed it to try and get it working with CUDA. I had to do some digging, and it turns out I need a version of pytorch with CUDA enabled, so I had to go and install Anaconda, and now conda is stuck trying to "solve" my environment to install pytorch with CUDA.
So...probably?
Pre-post edit: I can't get it to work.
I've installed pytorch with CUDA via pip3 and installed the NVIDIA toolkit, and it still doesn't see it:
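For anyone hitting the same wall, a quick sanity check like this shows whether the installed build is CPU-only or can actually see the GPU (a generic sketch, not the exact commands or output from my machine):

    import torch

    print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel was installed
    print(torch.version.cuda)         # None on CPU-only builds, a version string on CUDA builds
    print(torch.cuda.is_available())  # False if torch can't see the driver/toolkit

    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))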
I've wasted like an hour and a half on it now. I'm not a Python dev and don't have any ML experience, so this was just for fun, and now it's not fun anymore.
Welcome to every single Python ML project - dependency hell will quickly kill any enthusiasm you might have for trying out projects. It feels archaic to hit these issues with such cutting-edge technology.
CUDA is not the problem; the problem is crappy code being released on GitHub where basic things like a requirements.txt are missing, never mind an earnest attempt to document the environment the code was run on. That's on top of code full of hard-coded references to files and directories, plus Python libraries breaking compatibility with each other on point releases.
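Even a minimal pinned requirements.txt would go a long way, so the next person can at least reproduce the environment. Something like this (the version numbers and CUDA tag here are illustrative placeholders, not a known-good set):

    # requirements.txt -- pin exact versions so point releases can't break things later
    --extra-index-url https://download.pytorch.org/whl/cu116
    torch==1.12.1+cu116
    numpy==1.23.4
    opencv-python==4.6.0.66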
I can't find the source now, but I remember reading some code where the maintainer had to change a huge chunk of it because a point release of a dependency literally flipped either how it handled height/width or its BGR channel order (I can't remember which, but it was preposterous) between versions 2.5.4 and 2.5.5. There is no reason for doing that - it breaks everything just for grins and giggles.
Python itself is also a problem, but that's a rant for another day. Ah, how I wish Ruby had become the de facto language of choice for ML/deep learning!