I have no idea what Toyota or Adobe are up to and why they’re funding research with a name like this, but I fucking love it. It’s science, let’s get some whimsy back in here!!
More materially:
Optimized with a small set of labeled images, our model-agnostic approach adapts to various generative architectures, including Diffusion models, GANs, and Autoregressive models.
Am I correct in understanding that this is purely a visuospatial tool, and the examples aren’t just visual by coincidence? Like, there’s no way to stretch this to text models? Very new to this interpretability approach, very impressive.
lol yeah a little. A) it said something very odd-sounding like “Toyota university of Chicago”, which, wtf, why does Toyota have a university, and B) most labs would be hesitant to publish a paper with an extraneous clause in the title just to reference an absurd cartoon